
Fix inference precision. #802

Open

egeonur wants to merge 2 commits into PriorLabs:main from egeonur:ege/fix-fit-mode

Conversation


@egeonur egeonur commented Mar 1, 2026

Issue

Fixes #631.
I tried to fix the precision issue. The first part of the fix comes from #784, which skips adding thinking tokens so that single_eval_pos stays zero and the KV cache can be used during prediction. I also cast all tensors to the requested dtype, after which the results became:

no_cache vs repeat: 0.0
no_cache vs fit_preprocessors: 0.0
no_cache vs fit_with_cache: 0.0

The only caveat is that when I ran the script with float32 there were still some small inconsistencies:

no_cache vs repeat: 5.3390077e-06
no_cache vs fit_preprocessors: 5.3390077e-06
no_cache vs fit_with_cache: 5.5486857e-06

but I am guessing these are just rounding effects from the lower precision of float32 compared to float64.
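For context, a quick check of the floating-point resolutions involved (plain NumPy, independent of TabPFN):

```python
import numpy as np

# Machine epsilon for each precision: the smallest relative step
# representable around 1.0.
print(np.finfo(np.float32).eps)  # ~1.19e-07
print(np.finfo(np.float64).eps)  # ~2.22e-16
```

Accumulated over many operations, float32 rounding error in the 1e-6 to 1e-5 range, as seen above, is plausible.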

Also, tests/test_consistency.py fails locally, but I assume the stored reference outputs are there for future comparisons in case something deviates.

Motivation and Context

Code to reproduce the above results on a local machine:

import random
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
import numpy as np
import torch

from tabpfn import TabPFNRegressor

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=42
)

def _set_seeds() -> None:
    torch.manual_seed(0)
    np.random.seed(0)
    random.seed(0)

_set_seeds()
reg = TabPFNRegressor(fit_mode="low_memory", inference_precision=torch.float64)
reg.fit(X_train, y_train)
preds_no_cache = reg.predict(X_test)

reg = TabPFNRegressor(fit_mode="low_memory", inference_precision=torch.float64)
reg.fit(X_train, y_train)
preds_no_cache_repeat = reg.predict(X_test)

_set_seeds()
reg = TabPFNRegressor(fit_mode="fit_preprocessors", inference_precision=torch.float64)
reg.fit(X_train, y_train)
preds_cache_preproc = reg.predict(X_test)

_set_seeds()
reg = TabPFNRegressor(fit_mode="fit_with_cache", inference_precision=torch.float64)
reg.fit(X_train, y_train)
preds_kv_cache = reg.predict(X_test)

def _max_diff(a: np.ndarray, b: np.ndarray) -> float:
    return np.max(np.abs(a - b) / np.abs(a))

print("max relative diffs")
print("no_cache vs no_cache_repeat:", _max_diff(preds_no_cache, preds_no_cache_repeat))
print("no_cache vs cache_preproc:", _max_diff(preds_no_cache, preds_cache_preproc))
print("no_cache vs kv_cache:", _max_diff(preds_no_cache, preds_kv_cache))

Public API Changes

  • [x] No Public API changes
  • Yes, Public API changes (Details below)

How Has This Been Tested?

Tested locally on a MacBook CPU only (no GPU).
Collecting system and dependency information...
PyTorch version: 2.10.0
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: macOS 15.7.3 (arm64)
GCC version: Could not collect
Clang version: 17.0.0 (clang-1700.0.13.5)
CMake version: version 3.31.1
Libc version: N/A

Python version: 3.11.9 (main, Nov 22 2024, 14:33:40) [Clang 14.0.3 (clang-1403.0.22.14.1)] (64-bit runtime)
Python platform: macOS-15.7.3-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Apple M1 Max

Dependency Versions:

tabpfn: 6.4.1
torch: 2.10.0
numpy: 2.4.2
scipy: 1.17.1
pandas: 2.3.3
scikit-learn: 1.8.0
typing_extensions: 4.15.0
einops: 0.8.2
huggingface-hub: 1.5.0

Checklist

  • [x] The changes have been tested locally.
  • Documentation has been updated (if the public API or usage changes).
  • A changelog entry has been added (see changelog/README.md), or "no changelog needed" label requested.
  • [x] The code follows the project's style guidelines.
  • I have considered the impact of these changes on the public API.

Copilot AI review requested due to automatic review settings March 1, 2026 21:44
@egeonur egeonur requested a review from a team as a code owner March 1, 2026 21:44
@egeonur egeonur requested review from klemens-floege and removed request for a team March 1, 2026 21:44

@CLAassistant

CLAassistant commented Mar 1, 2026

CLA assistant check
All committers have signed the CLA.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request effectively addresses an inference precision issue by ensuring that the specified inference_precision is respected throughout the InferenceEngineCacheKV. The changes correctly cast tensors to the desired data type, which resolves the inconsistencies noted. Additionally, the modification to conditionally add "thinking tokens" only when not using a KV cache is a logical improvement for consistency. The code is well-structured, and I have a couple of minor suggestions to enhance conciseness.

Comment on lines +777 to +781
inference_dtype = (
    force_inference_dtype
    if force_inference_dtype is not None
    else torch.float32
)

medium

This block for determining inference_dtype can be made more concise. Since torch.dtype objects are not falsy, you can use the or operator to simplify this assignment.

            inference_dtype = force_inference_dtype or torch.float32

Comment on lines +835 to +839
inference_dtype = (
    self.force_inference_dtype
    if self.force_inference_dtype is not None
    else torch.float32
)

medium

This block for determining inference_dtype can be simplified for better readability and conciseness. Using the or operator is a more idiomatic way to provide a default value in this case.

            inference_dtype = self.force_inference_dtype or torch.float32

Contributor

Copilot AI left a comment

Pull request overview

Fixes prediction inconsistencies across fit_modes by aligning inference-time dtype handling and preventing KV-cache inference from adding extra “thinking” tokens that would change context length / cache behavior.

Changes:

  • Make preprocessing reproducible across repeated predict() calls by overriding preprocessing random state in the on-demand inference engine.
  • Force model parameters and input tensors to the requested inference_precision dtype for KV-cache inference.
  • Skip adding thinking tokens during KV-cache prediction (single_eval_pos == 0) to keep cacheable context stable.
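The dtype forcing described in the second bullet can be sketched in miniature like this (a NumPy illustration of the idea, not TabPFN's actual code; all names here are hypothetical):

```python
import numpy as np

def run_inference(weights, x, force_dtype=None):
    # Pick the requested precision, defaulting to float32; cast both
    # the parameters and the inputs so every intermediate computation
    # happens in one consistent dtype.
    dtype = force_dtype if force_dtype is not None else np.float32
    return x.astype(dtype) @ weights.astype(dtype)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 2))
x = rng.normal(size=(3, 4))
print(run_inference(w, x).dtype)                          # float32
print(run_inference(w, x, force_dtype=np.float64).dtype)  # float64
```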

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.

File Description
src/tabpfn/inference.py Adjusts preprocessing seeding override and forces inference dtype casting for KV-cache path.
src/tabpfn/architectures/base/transformer.py Avoids adding thinking tokens during KV-cache prediction to preserve cache consistency.


Comment on lines 365 to 369
y_train=self.y_train,
feature_schema=self.feature_schema,
parallel_mode="in-order",
-override_random_state=np.random.default_rng(self.static_seed),
+override_random_state=self.static_seed,
)

Copilot AI Mar 1, 2026


override_random_state is now passed as an int (self.static_seed). In TabPFNEnsemblePreprocessor.fit_transform_ensemble_members_iterator the random_state is selected via override_random_state or self.random_state, which will ignore an override of 0 (since 0 is falsy) and fall back to self.random_state, reintroducing non-deterministic preprocessing across predict calls. Prefer either passing a truthy override (e.g., a np.random.Generator like before) or (better) changing the downstream selection to override_random_state if override_random_state is not None else self.random_state so that seed 0 is respected.
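The falsy-zero pitfall flagged here is easy to demonstrate in isolation (hypothetical helper names, pure Python):

```python
def pick_seed_with_or(override, default):
    # Buggy fallback: a seed of 0 is falsy, so it is silently ignored.
    return override or default

def pick_seed_with_is_none(override, default):
    # Correct fallback: only use the default when no override was given.
    return override if override is not None else default

print(pick_seed_with_or(0, 1234))       # 1234 -- seed 0 lost
print(pick_seed_with_is_none(0, 1234))  # 0    -- seed 0 respected
```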

Comment on lines +526 to 530
is_kv_cache_prediction = (
self.cache_trainset_representation and single_eval_pos == 0
)
if self.add_thinking_tokens is not None and not is_kv_cache_prediction:
embedded_input, single_eval_pos = self.add_thinking_tokens(
Copilot AI Mar 1, 2026

This change alters when thinking tokens are added (they’re skipped for KV-cache prediction when single_eval_pos==0). There’s currently no test covering this specific behavior/contract (e.g., that fit_with_cache prediction path doesn’t append thinking tokens and stays consistent with other fit modes for a fixed seed). Please add/adjust a unit/integration test to lock this in—re-enabling the existing skipped “fit modes return equal results” tests (or adding a targeted regression test for #631) would help prevent regressions.

@egeonur
Author

egeonur commented Mar 10, 2026

@klemens-floege hey, I checked the failing tests, but they are not caused by anything I changed in my PR. After #757, this test can fail, but my local changes don't touch modality_detection.py or test_modality_detection.py. What would you suggest? The macOS test I can fix; it is the consistency test, so no big issue (I already mentioned it in the PR description), but the other platform errors shouldn't depend on my changes. The test seems flaky, or at least environment-sensitive.

@klemens-floege
Contributor

@egeonur could you please try a simple rebase onto main? For the changelog check, you need to add an .md file in the changelog folder. Thank you :)

@egeonur
Author

egeonur commented Mar 11, 2026

@klemens-floege I rebased, fixed the consistency test and added .md file. Can you run again to see whether the tests are fixed? 🤞

@egeonur
Author

egeonur commented Mar 11, 2026

@klemens-floege I get the same errors again, but #815 had the same failing tests. My local Python version is 3.11, and I suspect this error happens because the CI paths use Python 3.14, so the behaviour of pd.to_numeric or s.isna() may have changed under Python 3.14.

Contributor

@klemens-floege klemens-floege left a comment

Hi, thanks again for contributing! I ran the reproduction script locally from your branch (on CPU) and the inconsistencies are still present:

  • no_cache vs cache_preproc: 0.013
  • no_cache vs kv_cache: 1.423

A few thoughts:

On the consistency tests: I'd prefer not to adjust the random dataset inside the estimators to fix the test failures — those tests are there to verify that the internal model behaviour hasn't changed, so relaxing them isn't a great signal. If adjusting the random state inside the estimators is truly unavoidable to fix the inconsistency, that may be acceptable, but it should be a deliberate decision.

What I think would be valuable: Could you add a test to test_regressor.py and test_classifier.py that explicitly verifies the different fit_mode options produce consistent predictions? Something like:

def test_fit_mode_consistency(regressor_or_classifier):
    # assert predictions from low_memory, fit_preprocessors, fit_with_cache
    # are all within float tolerance of each other

This would both document the expected behaviour and catch regressions going forward.

@egeonur
Copy link
Author

egeonur commented Mar 12, 2026

import random
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
import numpy as np
import torch

from tabpfn import TabPFNRegressor

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=42
)

def _set_seeds() -> None:
    torch.manual_seed(0)
    np.random.seed(0)
    random.seed(0)

_set_seeds()
reg = TabPFNRegressor(fit_mode="low_memory", inference_precision=torch.float64, device="cpu")
reg.fit(X_train, y_train)
preds_no_cache = reg.predict(X_test)

reg = TabPFNRegressor(fit_mode="low_memory", inference_precision=torch.float64, device="cpu")
reg.fit(X_train, y_train)
preds_no_cache_repeat = reg.predict(X_test)

_set_seeds()
reg = TabPFNRegressor(fit_mode="fit_preprocessors", inference_precision=torch.float64, device="cpu")
reg.fit(X_train, y_train)
preds_cache_preproc = reg.predict(X_test)

_set_seeds()
reg = TabPFNRegressor(fit_mode="fit_with_cache", inference_precision=torch.float64, device="cpu")
reg.fit(X_train, y_train)
preds_kv_cache = reg.predict(X_test)

def _max_diff(a: np.ndarray, b: np.ndarray) -> float:
    return np.max(np.abs(a - b) / np.abs(a))

print("max relative diffs")
print("no_cache vs no_cache_repeat:", _max_diff(preds_no_cache, preds_no_cache_repeat))
print("no_cache vs cache_preproc:", _max_diff(preds_no_cache, preds_cache_preproc))
print("no_cache vs kv_cache:", _max_diff(preds_no_cache, preds_kv_cache))

When I run this with `uv run` on my local CPU, I get this result:
no_cache vs no_cache_repeat: 0.0
no_cache vs cache_preproc: 0.0
no_cache vs kv_cache: 0.0
@klemens-floege If I may ask, which script did you run? egeonur:ege/fix-fit-mode is the branch where I fixed the issue. If you run it on main, I get the same errors as you do, but that is before my fix. If it was another script, can I try it as well? Also, the override_random_state change isn't there to make the consistency tests pass; it's to fix the underlying inconsistency between fit modes. Without that change:

no_cache vs no_cache_repeat: 0.0
no_cache vs cache_preproc: 0.013375835
no_cache vs kv_cache: 0.013375835

The fit modes were still inconsistent, so I changed it to fix the inconsistency, not to pass the consistency test.

| Comparison | main | dtype fix only | dtype fix + random state fix |
| --- | --- | --- | --- |
| no_cache vs no_cache_repeat | 0.0% | 0.0% | 0.0% |
| no_cache vs cache_preproc | 1.3% | 1.3% | 0.0% |
| no_cache vs kv_cache | 142.3% | 1.3% | 0.0% |

@klemens-floege
Contributor

@egeonur I ran your script with device set to cpu:

max relative diffs
no_cache vs no_cache_repeat: 0.0
no_cache vs cache_preproc:   0.013377033
no_cache vs kv_cache:        1.4230262
tabpfn: 6.4.1 
numpy: 2.3.3
pandas: 2.3.3
scikit-learn: 1.6.1
scipy: 1.16.2
torch: 2.10.0

It could be that the pandas version makes a difference; I will do more digging. I think a concrete good next step in this PR is to add a new test to test_consistency.py that specifically tests this behavior:

@pytest.mark.parametrize("estimator_cls,data_fn", [
    (TabPFNRegressor, _get_tiny_regression_data),
    (TabPFNClassifier, _get_tiny_classification_data),
])
def test__fit_modes__produce_consistent_predictions(estimator_cls, data_fn):
    """All fit_mode values should produce numerically equivalent predictions."""
    X_train, y_train, X_test = data_fn()
    fit_modes = ["low_memory", "fit_preprocessors", "fit_with_cache"]
    preds = {}
    for mode in fit_modes:
        model = estimator_cls(**DEFAULT_CONFIG, fit_mode=mode)
        model.fit(X_train, y_train)
        if isinstance(model, TabPFNClassifier):
            preds[mode] = model.predict_proba(X_test)
        else:
            preds[mode] = model.predict(X_test)

    reference = preds["low_memory"]
    for mode in ["fit_preprocessors", "fit_with_cache"]:
        np.testing.assert_allclose(
            preds[mode], reference, rtol=1e-3, atol=1e-3,
            err_msg=f"fit_mode='{mode}' predictions differ from 'low_memory'",
        )
