- Short intro
- Key Features
- Installation
- Function Overview
- Examples of usage
- Available Tools
- Authors
- Contributing
- TODO
- References
- License
10x faster distance calculations via Numba/GPU | Unified API for metrics + distances | Cohen's d, FFT, Mahalanobis in one import
```bash
pip install dscience-tools
```

```python
from ds_tools import DSTools

tools = DSTools()
tools.metrics.cohens_d(group_a, group_b)  # numpy/numba/cupy auto-dispatch
```

| Feature | scipy | sklearn | DSTools |
|---|---|---|---|
| GPU acceleration | ❌ | ❌ | ✅ |
| Auto backend dispatch | ❌ | ❌ | ✅ |
| Cohen's d | ❌ | ❌ | ✅ |
| FFT + metrics in one API | ❌ | ❌ | ✅ |
This toolkit is built on top of popular libraries like Pandas, Polars, Scikit-learn, Optuna, and Matplotlib, providing a higher-level API for common tasks in Exploratory Data Analysis (EDA), feature preprocessing, model evaluation, and synthetic data generation. It is designed for data scientists, analysts, and researchers who want to write cleaner, more efficient, and more reproducible code.
- Advanced Data Analysis: Get quick and detailed statistics for numerical and categorical columns.
- Powerful Visualizations: Generate insightful correlation matrices and confusion matrices with a single function call.
- Comprehensive Model Evaluation: Calculate a wide range of classification metrics and visualize performance curves effortlessly.
- Synthetic Data Generation: Create datasets with specific statistical properties (mean, median, std, skew, kurtosis) for robust testing and simulation, including complex numerical distributions matching specific statistical moments (`generate_distribution`, `generate_distribution_from_metrics`).
- Efficient Preprocessing: Encode categorical variables, handle outliers, and create features from missing values.
- Utility Functions: A collection of helpers for stationarity testing, data validation, and file I/O operations.
- Data Exploration: Quickly get statistics for numerical and categorical features (`describe_numeric`, `describe_categorical`), check for missing values (`check_NINF`), and visualize correlations (`corr_matrix`).
- Model Evaluation: Comprehensive classification model evaluation (`evaluate_classification`, `compute_metrics`) with clear visualizations (`plot_confusion_matrix`).
- Data Preprocessing: Encode categorical variables (`labeling`), handle outliers (`remove_outliers_iqr`), and scale features (`min_max_scale`).
- Time Series Analysis: Test for stationarity using the Dickey-Fuller test (`test_stationarity`), as sketched after this list.
- Advanced Statistics: Calculate non-parametric correlation (`chatterjee_correlation`), entropy, and KL-divergence.
- Utilities: Save/load DataFrames to/from ZIP archives, generate random alphanumeric codes, and more.
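For instance, here is a minimal sketch of the stationarity test applied to a random walk. The exact signature of `test_stationarity` is an assumption based on the description above, so consult `function_list()` for the authoritative API.

```python
import numpy as np
import pandas as pd

from ds_tools import DSTools

tools = DSTools()

# A random walk is non-stationary, so the Dickey-Fuller test should
# not reject the unit-root hypothesis here.
series = pd.Series(np.random.normal(0, 1, 500).cumsum())

# Hypothetical call -- the argument and return types are assumptions,
# not the documented signature.
tools.test_stationarity(series)
```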
See CHANGELOG.md.
This library is actively maintained and will be expanded to cover more aspects of the daily data science workflow. The focus remains on providing high-performance, easy-to-use tools for common and resource-intensive tasks.
Here is the development roadmap:
- Expand Core Modules:
  - Add more loss functions and metrics to `tools.metrics` (e.g., for classification, clusterization, etc.).
  - Implement more distance measures in `tools.distance` (e.g., Levenshtein for strings, Silhouette, etc.).
- New `Preprocessing` Module:
  - Develop high-performance feature scaling and encoding functions.
  - Add utilities for handling time-series data.
- New `Visualization` Module:
  - Create simple wrappers around Matplotlib/Seaborn for common plots (e.g., feature distributions, ROC curves).
- Community & Contributions:
  - Improve documentation with more examples.
  - Create contribution guidelines (CONTRIBUTING.md).
Your feature requests and contributions are highly encouraged! Please open an issue to suggest a new function.
Install dscience-tools directly from PyPI:

```bash
pip install dscience-tools
```

or install from source, cloning the repository and installing the required packages:

```bash
git clone https://github.com/s-kav/ds_tools.git
cd ds_tools
pip install -r requirements.txt
```

Python 3.9 or higher is required.
The library provides a wide range of functions. To see a full, formatted list of available tools, metrics and distances, you can use the appropriate list methods:
```python
import pandas as pd

from ds_tools import DSTools

tools = DSTools()

print('\n', '='*30, ' Functions ', '='*30, '\n')
display(pd.DataFrame(tools.function_list()))

print('\n', '='*30, ' Distances ', '='*30, '\n')
display(tools.distance.list_distances().iloc[:, 0:2])

print('\n', '='*30, ' Metrics ', '='*30, '\n')
display(tools.metrics.list_metrics().iloc[:, 0:2])
```

Here are some simple examples of how to use this library.
Calculate Mean Absolute Error and its gradient. The best backend (GPU/Numba/NumPy) is chosen automatically.
```python
import numpy as np

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 2.2, 2.8, 4.3])

# Calculate only the loss value
mae_loss = tools.metrics.mae(y_true, y_pred)
print(f"MAE Loss: {mae_loss:.4f}")

# Calculate both loss and its gradient
loss, grad = tools.metrics.mae(y_true, y_pred, return_grad=True)
print(f"Gradient: {grad}")
```

Distance calculations follow the same unified API:

```python
u = np.random.rand(4)
v = np.random.rand(4)

euc_dist = tools.distance.euclidean(u, v)
print(f"Euclidean Distance: {euc_dist:.4f}")
```

Track training progress with the built-in monitoring helpers:

```python
# 1. Start monitoring
tools.metrics.start_monitoring()

# 2. Simulate training loop
for epoch in range(100):
    # Dummy loss values
    loss = 1 / (epoch + 1)
    val_loss = 1.2 / (epoch + 1) + np.random.rand() * 0.1
    # Update history at the end of each epoch
    tools.metrics.update(epoch, logs={'loss': loss, 'val_loss': val_loss})

# 3. Get history as a DataFrame or plot it
history_df = tools.metrics.get_history_df()
print(history_df)
tools.metrics.plot_history()
```

Tired of writing boilerplate code to see your model's performance? Use `evaluate_classification` for a complete summary.
```python
import numpy as np
from ds_tools import DSTools

# 1. Initialize the toolkit
tools = DSTools()

# 2. Generate some dummy data
y_true = np.array([0, 1, 1, 0, 1, 0, 0, 1])
y_probs = np.array([0.1, 0.8, 0.6, 0.7, 0.4, 0.2, 0.4, 0.7])

# 3. Get a comprehensive evaluation report
# This will print metrics and show plots for ROC and Precision-Recall curves.
results = tools.evaluate_classification(true_labels=y_true, pred_probs=y_probs)

# The results are also returned as a dictionary
print(f"\nROC AUC Score: {results['roc_auc']:.4f}")
```

Effect sizes and spectra are computed with the same toolkit:

```python
import numpy as np
from ds_tools import DSTools

tools = DSTools()

# 1. Calculate Cohen's d (effect size)
group_a = np.random.normal(10, 2, 5000)
group_b = np.random.normal(12, 2, 5000)
effect_size = tools.metrics.cohens_d(group_a, group_b)
print(f"Effect Size: {effect_size}")

# 2. Compute FFT using the Numba engine
signal = np.random.random(1000)
# Automatically pads to the next power of two: 1024 here
# (an input of length 100 would pad to 128).
spectrum = tools.metrics.fft(signal, engine='numba')
print(f"Spectrum shape: {len(spectrum)}")
```

The `evaluate_classification` call will produce:
- A detailed printout of key metrics (Accuracy, ROC AUC, Average Precision, etc.).
- A full classification report.
- A confusion matrix.
- Beautifully plotted ROC and Precision-Recall curves.
Example of classification metrics, report, and confusion matrix (at threshold = 0.7)

Example of precision vs recall and ROC (TPR vs FPR) curves

Example of classification metrics, report, and confusion matrix (at threshold = 0.5, for comparison)

Need to create a dataset with specific statistical properties? `generate_distribution_from_metrics` can do that.
```python
import numpy as np

from ds_tools import DSTools, DistributionConfig

tools = DSTools()

# Define the desired metrics
metrics_config = DistributionConfig(
    mean=1042,
    median=330,
    std=1500,
    min_val=1,
    max_val=120000,
    skewness=13.2,
    kurtosis=245,  # Excess kurtosis
    n=10000
)

# Generate the data
generated_data = tools.generate_distribution_from_metrics(n=10000, metrics=metrics_config)

print(f"Generated Mean: {np.mean(generated_data):.2f}")
print(f"Generated Std: {np.std(generated_data):.2f}")
```

Comparative analysis of target statistical parameters against actual generated data results (scenario A)

Comparative analysis of target statistical parameters against actual generated data results (scenario B)

Visualize the relationships in your data with a highly customizable correlation matrix.
```python
import numpy as np
import pandas as pd

from ds_tools import DSTools, CorrelationConfig

tools = DSTools()

data = {
    'feature_a': np.random.rand(100) * 100,
    'feature_b': np.random.rand(100) * 50 + 25,
    'feature_c': np.random.rand(100) * -80,
}
df = pd.DataFrame(data)
df['feature_d'] = df['feature_a'] * 1.5 + np.random.normal(0, 10, 100)

# --- Generate a Spearman correlation matrix ---
config = CorrelationConfig(build_method='spearman', font_size=12)
tools.corr_matrix(df, config=config)
```

This will display a publication-quality heatmap, masked to show only the lower triangle for clarity, using the Spearman correlation method.
Example of correlation matrix (by Pearson)

Example of correlation matrix (by Spearman)

Quickly understand the distribution of your categorical features.
```python
import pandas as pd

from ds_tools import DSTools

tools = DSTools()

data = {
    'city': ['London', 'Paris', 'London', 'New York', 'Paris', 'London'],
    'status': ['Active', 'Inactive', 'Active', 'Active', 'Inactive', 'Active']
}
df = pd.DataFrame(data)

# --- Get stats for a column ---
tools.category_stats(df, 'city')
```

Output:

```
city
  uniq_names  amount_values  percentage
0     London              3       50.00
1      Paris              2       33.33
2   New York              1       16.67
```

Plot a confusion matrix as a graphic, which is especially useful for classification tasks.
```python
import numpy as np

from ds_tools import DSTools

tools = DSTools()

np.random.seed(42)
N_SAMPLES = 1500
N_CLASSES = 3

random_errors = np.random.randint(1, N_CLASSES, size=N_SAMPLES)
y_true_multi = np.random.randint(0, N_CLASSES, size=N_SAMPLES)
correct_preds = np.random.rand(N_SAMPLES) < 0.75
y_pred_multi = np.where(correct_preds, y_true_multi, (y_true_multi + random_errors) % N_CLASSES)

tools.plot_confusion_matrix(
    y_true_multi,
    y_pred_multi,
    class_labels=['Cat', 'Dog', 'Bird'],
    title='Multi-Class Classification (Animals)',
    cmap='YlGnBu'
)
```

Example of confusion matrix plotting (for binary classification)

Example of confusion matrix plotting (for multiclass classification)

Example of benchmarking for MAE implementation

Example of benchmarking for RMSE implementation

Example of benchmarking for R2 implementation

The full code base for testing the other functions can be found here.
The library is now organized into logical modules. Here is an overview of the available toolkits:
General-purpose utilities for data analysis and manipulation.
- `function_list`: Prints a list of all available tools.
- `corr_matrix`: Calculates and visualizes a correlation matrix.
- `category_stats`: Provides detailed statistics for categorical columns.
- `remove_outliers_iqr`: Replaces or removes outliers using the IQR method (see the sketch after this list).
- `stat_normal_testing`: Performs normality tests on a distribution.
- ... and more.
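As a rough illustration, a sketch of the outlier helper might look like this. The call below assumes it takes a DataFrame and a column name and returns the cleaned frame, which is a guess from the description rather than the documented signature.

```python
import numpy as np
import pandas as pd

from ds_tools import DSTools

tools = DSTools()

# 200 well-behaved points plus two gross outliers.
df = pd.DataFrame({'value': np.concatenate([np.random.normal(50, 5, 200), [500.0, -300.0]])})

# Hypothetical call -- the argument order and the returned cleaned
# DataFrame are assumptions, not the documented behavior.
cleaned = tools.remove_outliers_iqr(df, 'value')
print(len(df), '->', len(cleaned))
```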
A high-performance toolkit for calculating loss functions and their gradients.
- Regression Losses: `mae`, `mse`, `rmse`, `huber_loss`, `quantile_loss` (see the sketch after this list).
- Classification Losses: `hinge_loss`, `log_loss` (Binary Cross-Entropy).
- Embedding Losses: `triplet_loss`.
- Monitoring: `start_monitoring`, `update`, `get_history_df`, `plot_history`.
- `list_metrics`: Prints a list of all available metrics.
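Assuming the other losses mirror the `mae` calling convention shown earlier, usage would presumably look like this sketch (the exact signatures are assumptions; run `list_metrics()` to confirm):

```python
import numpy as np

from ds_tools import DSTools

tools = DSTools()

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 2.2, 2.8, 4.3])

# Assumed to mirror the mae() interface; not documented signatures.
print(f"Huber Loss: {tools.metrics.huber_loss(y_true, y_pred):.4f}")

labels = np.array([1, 0, 1, 1])
probs = np.array([0.9, 0.2, 0.8, 0.6])
print(f"Log Loss (BCE): {tools.metrics.log_loss(labels, probs):.4f}")
```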
A high-performance toolkit for calculating distances and similarities.
- Vector-to-Vector: `euclidean`, `manhattan`, `cosine_similarity`, `minkowski`, `chebyshev`, `mahalanobis`, `haversine`, `hamming`, `jaccard`.
- Matrix Operations: `pairwise_euclidean`, `kmeans_distance` (see the sketch after this list).
- Neighbor Searches: `knn_distances`, `radius_neighbors`.
- `list_distances`: Prints a list of all available distances.
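By analogy with the vector-to-vector `euclidean` example earlier, the matrix helpers can presumably be called as in this sketch (the accepted shapes and return type are assumptions; run `list_distances()` to confirm):

```python
import numpy as np

from ds_tools import DSTools

tools = DSTools()

# Assumed to accept two 2-D arrays and return an (n, m) distance matrix;
# this mirrors the vector-to-vector API but is not a documented signature.
X = np.random.rand(5, 3)
Y = np.random.rand(4, 3)
D = tools.distance.pairwise_euclidean(X, Y)
print(D.shape)  # expected: (5, 4)
```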
See CONTRIBUTING and CONTRIBUTOR_LICENSE_AGREEMENT.
- Add some important kinds of plots:
📌 KS Plot – Measures how well your model separates positive and negative classes.
📌 SHAP Plot – Explains feature impact and model interpretability.
📌 QQ Plot – Checks if your data follows a theoretical distribution.
📌 Cumulative Explained Variance – Helps decide the optimal number of PCA components.
📌 Gini vs Entropy – Key metrics for understanding decision tree impurity.
📌 Bias–Variance Tradeoff – Shows the balance between underfitting and overfitting.
📌 ROC Curve – Evaluates classification performance across thresholds.
📌 Precision–Recall Curve – Crucial for imbalanced datasets.
📌 Elbow Curve – Helps choose the right number of clusters in K-Means.
- Implement the Fast Fourier Transform (FFT) algorithm and Shannon's interpolation formula: done, version 2.3.2.
- Add more fast distance metrics (expanding the existing ones): done, version 2.3.2.
For citing, please use:
Sergii Kavun. (2025). s-kav/ds_tools: Version 2.3.2 (v.2.3.2). Zenodo. https://doi.org/10.5281/zenodo.17080822
This project is licensed under the Apache License 2.0.
You are free to use, modify, and distribute this software in both commercial and non-commercial projects.
✨ Thanks for visiting DSTools: Data Science Research Toolkit!