---
title: Quickstart
description: Get started with TabPFN in minutes.
---
TabPFN can be used in two ways, depending on your workflow and compute setup. Use the API for our hosted service and for production use. For testing, or if you have GPUs available, install the open-source package with the checkpoint from Hugging Face.
Recommended for researchers, ML practitioners, and engineers with GPU availability. TabPFN is available as an open-source Python package on GitHub, with the checkpoint for non-commercial use hosted on Hugging Face. It runs the model locally, giving you full control over your data and experimentation.
```shell
pip install tabpfn
```

To get access to the latest features and our TabPFN-2.5 model, please upgrade the package to the latest version:

```shell
pip install --upgrade tabpfn
```

Use the TabPFNClassifier just like any scikit-learn estimator:
```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from tabpfn import TabPFNClassifier

# Example data: the scikit-learn breast cancer dataset
X_train, X_test, y_train, y_test = train_test_split(*load_breast_cancer(return_X_y=True), random_state=42)

model = TabPFNClassifier()
model.fit(X_train, y_train)
preds = model.predict(X_test)
```

Recommended for data teams and developers who want TabPFN's performance without managing infrastructure. The Prior Labs API provides cloud-hosted access to TabPFN-2.5 and manages GPU compute, scaling, and model versioning. You can use it either through a REST API or the Python SDK.
```shell
pip install tabpfn-client
```

Instantiate the hosted TabPFNClassifier for API-backed predictions:
```python title="Classifier"
from tabpfn_client import TabPFNClassifier

# Use TabPFN like any scikit-learn model
model = TabPFNClassifier()
model.fit(X_train, y_train)

# Get predictions
preds = model.predict(X_test)
```

```python title="Regressor"
from tabpfn_client import TabPFNRegressor

# Use TabPFN like any scikit-learn model
model = TabPFNRegressor()
model.fit(X_train, y_train)

# Get predictions
preds = model.predict(X_test)
```

To authenticate, you can simply run a prediction, which will prompt you to sign in if needed. To log in programmatically, follow these steps:
```python
import tabpfn_client

token = tabpfn_client.get_access_token()
```

To log in on another machine using your access token and skip the interactive flow, use:

```python
tabpfn_client.set_access_token(token)
```

First, define your dataset paths and authentication details. This allows you to reuse them across fit and predict calls.
```python
import os, json, requests

# Define your file paths
train_path = "train.csv"  # path to your training dataset
test_path = "test.csv"    # path to your test dataset

# Get your API key from the environment
api_key = os.getenv("PRIORLABS_API_KEY")
headers = {"Authorization": f"Bearer {api_key}"}
```

Next, upload your training dataset to the /v1/fit endpoint. The API automatically preprocesses the data.
```python title="Classification"
payload = {"task": "classification"}
files = {
    "data": (None, json.dumps(payload), "application/json"),
    "dataset_file": (train_path, open(train_path, "rb")),
}
fit_response = requests.post(
    "https://api.priorlabs.ai/v1/fit",
    headers=headers,
    files=files,
)
model_id = fit_response.json().get("model_id")
print(f"☑️ Model trained: {model_id}")
```

```python title="Regression"
payload = {"task": "regression"}
files = {
    "data": (None, json.dumps(payload), "application/json"),
    "dataset_file": (train_path, open(train_path, "rb")),
}
fit_response = requests.post(
    "https://api.priorlabs.ai/v1/fit",
    headers=headers,
    files=files,
)
model_id = fit_response.json().get("model_id")
print(f"☑️ Model trained: {model_id}")
```

Once fitted, you can send your test dataset to the /v1/predict endpoint using the returned model_id. You'll receive predictions and/or probabilities in JSON format.
```python title="Classification"
payload = {
    "task": "classification",
    "model_id": model_id,
    "params": {
        "output_type": "preds"  # choose 'probas' to return probabilities
    }
}
files = {
    "data": (None, json.dumps(payload), "application/json"),
    "dataset_file": (test_path, open(test_path, "rb")),
}
predict_response = requests.post(
    "https://api.priorlabs.ai/v1/predict",
    headers=headers,
    files=files,
)
print("☑️ Predictions:")
print(json.dumps(predict_response.json(), indent=2))
```

```python title="Regression"
payload = {
    "task": "regression",
    "model_id": model_id,
    "params": {
        "output_type": "mean"
    }
}
files = {
    "data": (None, json.dumps(payload), "application/json"),
    "dataset_file": (test_path, open(test_path, "rb")),
}
predict_response = requests.post(
    "https://api.priorlabs.ai/v1/predict",
    headers=headers,
    files=files,
)
print("☑️ Predictions:")
print(json.dumps(predict_response.json(), indent=2))
```

By default you'll get the latest TabPFN model. However, you can also specify a particular version.
See the details of the available models.
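Before pinning a model version, you may want to confirm which package versions you actually have installed. A minimal sketch using only the standard library (the helper name and the `"not installed"` fallback string are ours, for illustration):

```python
from importlib import metadata

def installed_version(package: str) -> str:
    """Return the installed version of a distribution, or a placeholder if absent."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return "not installed"

print("tabpfn:", installed_version("tabpfn"))
print("tabpfn-client:", installed_version("tabpfn-client"))
```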
- Ensure your installation is up to date:

  ```shell
  pip install -U tabpfn
  ```

- Instantiate the classifier or regressor:

  ```python
  from tabpfn import TabPFNClassifier, TabPFNRegressor
  from tabpfn.constants import ModelVersion

  # Set the version of the TabPFNClassifier to 2.5
  classifier = TabPFNClassifier.create_default_for_version(ModelVersion.V2_5)

  # Set the version of the TabPFNRegressor to 2.5
  regressor = TabPFNRegressor.create_default_for_version(ModelVersion.V2_5)
  ```

- Ensure your installation is up to date:
  ```shell
  pip install -U tabpfn-client
  ```

- Instantiate the classifier or regressor:

  ```python title="Classifier"
  from tabpfn_client.estimator import TabPFNClassifier
  from tabpfn_client.constants import ModelVersion

  # Set the version of the TabPFNClassifier to 2.5
  classifier = TabPFNClassifier.get_default_for_version(ModelVersion.V2_5)
  ```

  ```python title="Regressor"
  from tabpfn_client.estimator import TabPFNRegressor
  from tabpfn_client.constants import ModelVersion

  # Set the version of the TabPFNRegressor to 2.5
  regressor = TabPFNRegressor.get_default_for_version(ModelVersion.V2_5)
  ```
Alternatively, a variety of models can be selected using the model_path option:

```python
from tabpfn_client import TabPFNClassifier

# View the models available
print(TabPFNClassifier().list_available_models())

# Create a classifier or regressor using a particular model
classifier = TabPFNClassifier(model_path="v2.5_real")
```

The model is selected using the model_path argument in the payload's config.
For example, the payload to specify the TabPFN-2.5 regression model is
```json
{
  "task": "regression",
  "config": {
    "model_path": "tabpfn-v2.5-regressor-v2.5_default.ckpt"
  }
}
```

The TabPFN (v1) model is only available through the OSS package, via the corresponding tag on GitHub.
Have questions? Check out the FAQ for answers to common topics about GPUs, limits, API usage, and more.