57 commits
0cf47f3
refactor: move import statement from datasets inside method for laziness
Irozuku Feb 5, 2026
17aa657
refactor: move import statements inside the fit method for improved l…
Irozuku Feb 5, 2026
c84d222
refactor: move imports inside methods for lazy loading for stable dif…
Irozuku Feb 5, 2026
94b025a
refactor: move import statements inside the transform method for impr…
Irozuku Feb 5, 2026
fb2c10c
refactor: move import statements inside methods for lazy loading in e…
Irozuku Feb 5, 2026
38c14c9
refactor: move import statements inside methods for lazy loading in t…
Irozuku Feb 5, 2026
d2772ec
refactor: move SMOTE import inside the constructor for lazy loading i…
Irozuku Feb 5, 2026
06142cd
refactor: move pyarrow import inside the get_output_type method for l…
Irozuku Feb 5, 2026
b77e09e
refactor: move pandas and CountVectorizer imports inside methods for …
Irozuku Feb 5, 2026
59e5dbf
refactor: move pyarrow import inside the get_output_type method for l…
Irozuku Feb 5, 2026
b980533
refactor: move pyarrow import inside the get_output_type method for l…
Irozuku Feb 5, 2026
becd504
refactor: move pyarrow import inside the get_output_type method for l…
Irozuku Feb 5, 2026
ec18b35
refactor: move pyarrow import inside the get_output_type method for l…
Irozuku Feb 5, 2026
7a3def2
refactor: move pyarrow import inside the get_output_type method for l…
Irozuku Feb 5, 2026
fbbaf93
refactor: move pyarrow import inside the get_output_type method for l…
Irozuku Feb 5, 2026
6d5220d
refactor: move pyarrow and LabelEncoder import inside the get_output_…
Irozuku Feb 5, 2026
6d38e1f
refactor: move pyarrow import inside the get_output_type method for l…
Irozuku Feb 5, 2026
d4fd4a1
refactor: move pyarrow import inside the get_output_type method for l…
Irozuku Feb 5, 2026
457be56
refactor: move pyarrow import inside the get_output_type method for l…
Irozuku Feb 5, 2026
43aea53
refactor: move pyarrow import inside the get_output_type method for l…
Irozuku Feb 5, 2026
62a74c0
refactor: move pyarrow import inside the get_output_type method for l…
Irozuku Feb 5, 2026
197c973
refactor: move pyarrow import inside the get_output_type method for l…
Irozuku Feb 5, 2026
ae06729
refactor: move pyarrow import inside the OrdinalEncoder method for la…
Irozuku Feb 5, 2026
cc02662
refactor: move pyarrow import inside the get_output_type method for l…
Irozuku Feb 5, 2026
b30a73c
refactor: move pyarrow import inside the get_output_type method for l…
Irozuku Feb 5, 2026
6349384
refactor: move pyarrow import inside the get_output_type method for l…
Irozuku Feb 5, 2026
a8eba4d
refactor: move pyarrow import inside the get_output_type method for l…
Irozuku Feb 5, 2026
b6683d4
refactor: move pyarrow import inside the get_output_type method for l…
Irozuku Feb 5, 2026
8f66982
refactor: move pyarrow import inside the get_output_type method for l…
Irozuku Feb 5, 2026
3e2b07c
refactor: move pyarrow import inside the get_output_type method for l…
Irozuku Feb 5, 2026
b7456ed
refactor: move pyarrow import inside the get_output_type method for l…
Irozuku Feb 5, 2026
802fe96
refactor: move pyarrow and numpy imports inside methods for lazy load…
Irozuku Feb 5, 2026
9263664
refactor: move pandas and datasets imports inside methods for lazy lo…
Irozuku Feb 5, 2026
f52b422
refactor: move pandas and datasets imports inside methods for lazy lo…
Irozuku Feb 5, 2026
3730203
refactor: move imports inside methods for lazy loading in KernelShap,…
Irozuku Feb 5, 2026
b6b152d
refactor: move imports inside methods for lazy loading in all explorers
Irozuku Feb 5, 2026
fe7f23d
refactor: remove unused imports and optimize modify_table usage in Co…
Irozuku Feb 5, 2026
d7b4eab
refactor: reorganize imports and remove unused gc import in DatasetJob
Irozuku Feb 5, 2026
263bf83
refactor: simplify import statements in explorer_job by consolidating…
Irozuku Feb 5, 2026
640e6e2
refactor: remove lazy imports to avoid duplicate and unused imports w…
Irozuku Feb 5, 2026
50ae6b5
refactor: consolidate import statements for kink in PredictJob
Irozuku Feb 5, 2026
c5d54d9
refactor: move imports inside score methods to reduce global scope in…
Irozuku Feb 5, 2026
014a42d
refactor: move imports inside score methods to reduce global scope in…
Irozuku Feb 5, 2026
f14e62c
refactor: move imports inside score methods to reduce global scope in…
Irozuku Feb 5, 2026
7bf0508
refactor: move imports inside score methods to reduce global scope in…
Irozuku Feb 5, 2026
ce38e67
refactor: move imports inside score methods to reduce global scope in…
Irozuku Feb 5, 2026
8ad7168
refactor: move imports inside score methods to reduce global scope in…
Irozuku Feb 5, 2026
46c1b86
refactor: move imports inside methods to reduce global scope in Disti…
Irozuku Feb 5, 2026
ff0233a
refactor: move imports inside methods to reduce global scope in OpusM…
Irozuku Feb 5, 2026
053fd49
refactor: move Llama import inside the QwenModel constructor to reduc…
Irozuku Feb 5, 2026
4635484
refactor: move imports inside methods to reduce global scope in BagOf…
Irozuku Feb 5, 2026
f37fe5b
refactor: move imports inside methods to reduce global scope in BaseO…
Irozuku Feb 5, 2026
840ee83
refactor: move pyarrow import inside methods to reduce global scope i…
Irozuku Feb 5, 2026
710506e
refactor: move imports inside functions and methods to minimize globa…
Irozuku Feb 5, 2026
bf3a45a
refactor: move pyarrow and pandas imports inside functions to reduce …
Irozuku Feb 5, 2026
91b735c
refactor: remove unnecessary import statements to clean up code in da…
Irozuku Feb 5, 2026
99e879a
fix: apply pre commit to various files
Irozuku Feb 5, 2026
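All 57 commits apply the same pattern: imports of heavy dependencies (pyarrow, torch, transformers, pandas, scikit-learn, imbalanced-learn) move from module level into the methods that actually need them, so importing a converter module no longer pays for its whole dependency tree up front. A minimal sketch of the pattern; the ExampleConverter class is illustrative, not taken from this PR:

# Before: a module-level import is paid by every caller that imports
# this module, even code that never touches pyarrow.
# import pyarrow as pa


class ExampleConverter:
    def get_output_type(self, column_name: str = None):
        # After: the import runs only when the method is first called.
        # Repeat calls are cheap because Python caches modules in
        # sys.modules, so re-executing the import is just a dict lookup.
        import pyarrow as pa

        return pa.float64()

The trade-off is that a missing optional dependency now raises ImportError at call time rather than at import time, which is usually acceptable for optional converter backends.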
12 changes: 8 additions & 4 deletions DashAI/back/converters/hugging_face/embedding.py
@@ -1,7 +1,4 @@
import pyarrow as pa
import torch
from datasets import Dataset, concatenate_datasets
from transformers import AutoModel, AutoTokenizer
"""HuggingFace embedding converter with lazy-loaded dependencies."""

from DashAI.back.converters.category.advanced_preprocessing import (
AdvancedPreprocessingConverter,
@@ -104,16 +101,23 @@ def __init__(self, **kwargs):

def get_output_type(self, column_name: str = None) -> DashAIDataType:
"""Returns Float32 as the output type for embeddings."""
import pyarrow as pa

return Float(arrow_type=pa.float32())

def _load_model(self):
"""Load the embedding model and tokenizer."""
from transformers import AutoModel, AutoTokenizer

self.tokenizer = AutoTokenizer.from_pretrained(self.model_name)
self.model = AutoModel.from_pretrained(self.model_name).to(self.device)
self.model.eval()

def _process_batch(self, batch: DashAIDataset) -> DashAIDataset:
"""Process a batch of text into embeddings."""
import torch
from datasets import Dataset, concatenate_datasets

all_column_embeddings = []

for column in batch.column_names:
7 changes: 4 additions & 3 deletions DashAI/back/converters/hugging_face/tokenizer.py
@@ -1,6 +1,3 @@
from datasets import Dataset, concatenate_datasets
from transformers import AutoTokenizer

from DashAI.back.converters.category.advanced_preprocessing import (
AdvancedPreprocessingConverter,
)
@@ -87,12 +84,16 @@ def __init__(self, **kwargs):

def _load_model(self):
"""Load tokenizer only."""
from transformers import AutoTokenizer

self.tokenizer = AutoTokenizer.from_pretrained(self.model_name)

def _process_batch(self, batch: DashAIDataset) -> DashAIDataset:
"""
Tokenize a batch of text columns and store each input_id in a separate column.
"""
from datasets import Dataset, concatenate_datasets

all_column_tokens = []

for column in batch.column_names:
4 changes: 2 additions & 2 deletions DashAI/back/converters/hugging_face_wrapper.py
@@ -1,8 +1,6 @@
from abc import ABCMeta, abstractmethod
from typing import Type

from datasets import concatenate_datasets

from DashAI.back.converters.base_converter import BaseConverter
from DashAI.back.dataloaders.classes.dashai_dataset import DashAIDataset
from DashAI.back.types.dashai_data_type import DashAIDataType
@@ -49,6 +47,8 @@ def fit(self, x: DashAIDataset, y: DashAIDataset = None) -> Type[BaseConverter]:

def transform(self, x: DashAIDataset, y: DashAIDataset = None) -> DashAIDataset:
"""Transform the input data using the model."""
from datasets import concatenate_datasets

all_results = []

# Process in batches
@@ -1,5 +1,4 @@
from imblearn.combine import SMOTEENN
from imblearn.over_sampling import SMOTE

from DashAI.back.converters.category.sampling import SamplingConverter
from DashAI.back.converters.imbalanced_learn_wrapper import ImbalancedLearnWrapper
@@ -61,6 +60,8 @@ class SMOTEENNConverter(SamplingConverter, ImbalancedLearnWrapper, SMOTEENN):
IMAGE_PREVIEW = "smoteenn.png"

def __init__(self, **kwargs):
from imblearn.over_sampling import SMOTE

self.smote = SMOTE(
sampling_strategy=kwargs.get("sampling_strategy", "auto"),
random_state=kwargs.get("random_state"),
10 changes: 5 additions & 5 deletions DashAI/back/converters/imbalanced_learn_wrapper.py
@@ -1,10 +1,6 @@
from abc import ABCMeta
from typing import Type, Union

import numpy as np
import pandas as pd
import pyarrow as pa

from DashAI.back.converters.base_converter import BaseConverter
from DashAI.back.dataloaders.classes.dashai_dataset import DashAIDataset
from DashAI.back.job.base_job import JobError
@@ -20,7 +16,7 @@ class ImbalancedLearnWrapper(BaseConverter, metaclass=ABCMeta):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.fitted = False
self._resampled_table: Union[pa.Table, None] = None
self._resampled_table = None
self.original_X_column_names_: list = []
self.original_target_column_name_: str = ""

@@ -44,6 +40,10 @@ def fit(self, x: DashAIDataset, y: DashAIDataset) -> Type[BaseConverter]:
Fit the sampler using imbalanced-learn's fit_resample and store the combined
result.
"""
import numpy as np
import pandas as pd
import pyarrow as pa

if y is None or len(y) == 0:
raise ValueError(
"Imbalanced-learn samplers require a non-empty target dataset (y)."
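One side effect visible in this file: dropping the module-level pyarrow import also drops the Union[pa.Table, None] annotation on _resampled_table. If that hint is worth keeping, typing.TYPE_CHECKING plus a quoted annotation preserves it without the runtime import. A sketch of that standard alternative, not what this PR does (the PR simply removes the annotation):

from typing import TYPE_CHECKING, Optional

if TYPE_CHECKING:
    # Evaluated only by static type checkers, never at runtime,
    # so pyarrow stays off the import-time dependency graph.
    import pyarrow as pa


class LazyWrapperSketch:
    def __init__(self) -> None:
        # The quoted annotation defers evaluation, so defining or
        # instantiating this class never imports pyarrow.
        self._resampled_table: Optional["pa.Table"] = None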
@@ -1,4 +1,3 @@
import pyarrow as pa
from sklearn.kernel_approximation import (
AdditiveChi2Sampler as AdditiveChi2SamplerOperation,
)
@@ -57,4 +56,6 @@ class AdditiveChi2Sampler(

def get_output_type(self, column_name: str = None) -> DashAIDataType:
"""Returns Float64 as the output type for transformed data."""
import pyarrow as pa

return Float(arrow_type=pa.float64())
7 changes: 4 additions & 3 deletions DashAI/back/converters/scikit_learn/bag_of_words.py
@@ -1,6 +1,3 @@
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

from DashAI.back.converters.base_converter import BaseConverter
from DashAI.back.converters.category.advanced_preprocessing import (
AdvancedPreprocessingConverter,
@@ -90,6 +87,8 @@ class BagOfWordsConverter(AdvancedPreprocessingConverter, BaseConverter):

def __init__(self, **kwargs):
super().__init__()
from sklearn.feature_extraction.text import CountVectorizer

self.vectorizer = CountVectorizer(
max_features=kwargs.get("max_features", 1000),
lowercase=kwargs.get("lowercase", True),
@@ -111,6 +110,8 @@ def fit(self, x: DashAIDataset, y=None) -> "BagOfWordsConverter":

def transform(self, x: DashAIDataset, y=None) -> DashAIDataset:
"""Transform text into Bag-of-Words frequency columns."""
import pandas as pd

if not self.fitted:
raise RuntimeError("The converter must be fitted before calling transform.")

3 changes: 2 additions & 1 deletion DashAI/back/converters/scikit_learn/binarizer.py
@@ -1,4 +1,3 @@
import pyarrow as pa
from sklearn.preprocessing import Binarizer as BinarizerOperation

from DashAI.back.converters.category.encoding import EncodingConverter
@@ -51,4 +50,6 @@ class Binarizer(EncodingConverter, SklearnWrapper, BinarizerOperation):

def get_output_type(self, column_name: str = None) -> DashAIDataType:
"""Returns Integer64 as the output type for binarized data."""
import pyarrow as pa

return Integer(arrow_type=pa.int64())
3 changes: 2 additions & 1 deletion DashAI/back/converters/scikit_learn/cca.py
@@ -1,4 +1,3 @@
import pyarrow as pa
from sklearn.cross_decomposition import CCA as CCAOPERATION

from DashAI.back.converters.category.advanced_preprocessing import (
@@ -72,4 +71,6 @@ class CCA(AdvancedPreprocessingConverter, SklearnWrapper, CCAOPERATION):

def get_output_type(self, column_name: str = None) -> DashAIDataType:
"""Returns Float64 as the output type for transformed data."""
import pyarrow as pa

return Float(arrow_type=pa.float64())
3 changes: 2 additions & 1 deletion DashAI/back/converters/scikit_learn/fast_ica.py
@@ -1,4 +1,3 @@
import pyarrow as pa
from sklearn.decomposition import FastICA as FastICAOperation

from DashAI.back.api.utils import (
@@ -161,4 +160,6 @@ def __init__(self, **kwargs):

def get_output_type(self, column_name: str = None) -> DashAIDataType:
"""Returns Float64 as the output type for transformed data."""
import pyarrow as pa

return Float(arrow_type=pa.float64())
@@ -1,4 +1,3 @@
import pyarrow as pa
from sklearn.feature_selection import (
GenericUnivariateSelect as GenericUnivariateSelectOperation,
)
@@ -62,4 +61,6 @@ class GenericUnivariateSelect(

def get_output_type(self, column_name: str = None) -> DashAIDataType:
"""Returns Float64 as the output type for selected features."""
import pyarrow as pa

return Float(arrow_type=pa.float64())
3 changes: 2 additions & 1 deletion DashAI/back/converters/scikit_learn/knn_imputer.py
@@ -1,4 +1,3 @@
import pyarrow as pa
from sklearn.impute import KNNImputer as KNNImputerOperation

from DashAI.back.converters.category.basic_preprocessing import (
@@ -88,4 +87,6 @@ def __init__(self, **kwargs):

def get_output_type(self, column_name: str = None) -> DashAIDataType:
"""Returns Float64 as the output type for imputed data."""
import pyarrow as pa

return Float(arrow_type=pa.float64())
3 changes: 2 additions & 1 deletion DashAI/back/converters/scikit_learn/label_binarizer.py
@@ -1,4 +1,3 @@
import pyarrow as pa
from sklearn.preprocessing import LabelBinarizer as LabelBinarizerOperation

from DashAI.back.converters.category.encoding import EncodingConverter
@@ -44,4 +43,6 @@ class LabelBinarizer(EncodingConverter, SklearnWrapper, LabelBinarizerOperation)

def get_output_type(self, column_name: str = None) -> DashAIDataType:
"""Returns Integer64 as the output type for binarized labels."""
import pyarrow as pa

return Integer(arrow_type=pa.int64())
7 changes: 4 additions & 3 deletions DashAI/back/converters/scikit_learn/label_encoder.py
@@ -1,8 +1,5 @@
from typing import Union

import pyarrow as pa
from sklearn.preprocessing import LabelEncoder as LabelEncoderOperation

from DashAI.back.converters.category.encoding import EncodingConverter
from DashAI.back.converters.sklearn_wrapper import SklearnWrapper
from DashAI.back.core.schema_fields.base_schema import BaseSchema
@@ -51,6 +48,8 @@ def get_output_type(self, column_name: str = None) -> DashAIDataType:
If the encoder has been fitted and has classes_, use them to create
a proper categorical type.
"""
import pyarrow as pa

if column_name and column_name in self.encoders:
encoder = self.encoders[column_name]
if hasattr(encoder, "classes_"):
@@ -63,6 +62,8 @@ def get_output_type(self, column_name: str = None) -> DashAIDataType:

def fit(self, x: DashAIDataset, y: Union[DashAIDataset, None] = None):
"""Fit label encoders to each column in the dataset."""
from sklearn.preprocessing import LabelEncoder as LabelEncoderOperation

x_pandas = x.to_pandas()

for col in x_pandas.columns:
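The notable piece in this file is that get_output_type derives the categorical type from the fitted encoder's classes_. A self-contained illustration of that idea using only scikit-learn and pyarrow; the fitted_categories helper is hypothetical, not DashAI API:

from sklearn.preprocessing import LabelEncoder


def fitted_categories(values):
    """Return the fitted label classes as a pyarrow array, mirroring
    how the converter above builds its categorical output type."""
    import pyarrow as pa  # lazy import, matching this PR's convention

    encoder = LabelEncoder().fit(values)
    # classes_ holds the sorted unique labels seen during fit.
    return pa.array(list(encoder.classes_))


print(fitted_categories(["dog", "cat", "dog"]))  # -> ["cat", "dog"]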
3 changes: 2 additions & 1 deletion DashAI/back/converters/scikit_learn/max_abs_scaler.py
@@ -1,4 +1,3 @@
import pyarrow as pa
from sklearn.preprocessing import MaxAbsScaler as MaxAbsScalerOperation

from DashAI.back.converters.category.scaling_and_normalization import (
@@ -39,4 +38,6 @@ class MaxAbsScaler(

def get_output_type(self, column_name: str = None) -> DashAIDataType:
"""Returns Float64 as the output type for scaled data."""
import pyarrow as pa

return Float(arrow_type=pa.float64())
3 changes: 2 additions & 1 deletion DashAI/back/converters/scikit_learn/min_max_scaler.py
@@ -1,4 +1,3 @@
import pyarrow as pa
from sklearn.preprocessing import MinMaxScaler as MinMaxScalerOperation

from DashAI.back.converters.category.scaling_and_normalization import (
@@ -74,4 +73,6 @@ def __init__(self, **kwargs):

def get_output_type(self, column_name: str = None) -> DashAIDataType:
"""Returns Float64 as the output type for scaled data."""
import pyarrow as pa

return Float(arrow_type=pa.float64())
3 changes: 2 additions & 1 deletion DashAI/back/converters/scikit_learn/missing_indicator.py
@@ -1,4 +1,3 @@
import pyarrow as pa
from sklearn.impute import MissingIndicator as MissingIndicatorOperation

from DashAI.back.converters.category.basic_preprocessing import (
@@ -35,4 +34,6 @@ def __init__(self, **kwargs):

def get_output_type(self, column_name: str = None) -> DashAIDataType:
"""Returns Integer64 as the output type for binary indicators."""
import pyarrow as pa

return Integer(arrow_type=pa.int64())
3 changes: 2 additions & 1 deletion DashAI/back/converters/scikit_learn/normalizer.py
@@ -1,4 +1,3 @@
import pyarrow as pa
from sklearn.preprocessing import Normalizer as NormalizerOperation

from DashAI.back.converters.category.scaling_and_normalization import (
@@ -45,4 +44,6 @@ class Normalizer(ScalingAndNormalizationConverter, SklearnWrapper, NormalizerOpe

def get_output_type(self, column_name: str = None) -> DashAIDataType:
"""Returns Float64 as the output type for normalized data."""
import pyarrow as pa

return Float(arrow_type=pa.float64())
3 changes: 2 additions & 1 deletion DashAI/back/converters/scikit_learn/nystroem.py
@@ -1,4 +1,3 @@
import pyarrow as pa
from sklearn.kernel_approximation import Nystroem as NystroemOperation

from DashAI.back.api.utils import create_random_state, parse_string_to_dict
@@ -133,4 +132,6 @@ def __init__(self, **kwargs):

def get_output_type(self, column_name: str = None) -> DashAIDataType:
"""Returns Float64 as the output type for transformed data."""
import pyarrow as pa

return Float(arrow_type=pa.float64())
3 changes: 2 additions & 1 deletion DashAI/back/converters/scikit_learn/one_hot_encoder.py
@@ -1,4 +1,3 @@
import pyarrow as pa
from sklearn.preprocessing import OneHotEncoder as OneHotEncoderOperation

from DashAI.back.api.utils import cast_string_to_type, parse_string_to_list
@@ -118,4 +117,6 @@ def __init__(self, **kwargs):

def get_output_type(self, column_name: str = None) -> DashAIDataType:
"""Returns Integer64 as the output type for one-hot encoded data."""
import pyarrow as pa

return Integer(arrow_type=pa.int64())
3 changes: 2 additions & 1 deletion DashAI/back/converters/scikit_learn/ordinal_encoder.py
@@ -1,4 +1,3 @@
import pyarrow as pa
from sklearn.preprocessing import OrdinalEncoder as OrdinalEncoderOperation

from DashAI.back.api.utils import cast_string_to_type
@@ -116,6 +115,8 @@ def get_output_type(self, column_name: str = None) -> DashAIDataType:
Returns Categorical type with encoded values.
After fitting, categories are encoded as integers.
"""
import pyarrow as pa

# Return a placeholder categorical type
# The actual categories will be set by sklearn_wrapper's transform method
return Categorical(values=pa.array(["0", "1"]))
3 changes: 2 additions & 1 deletion DashAI/back/converters/scikit_learn/pca.py
@@ -1,4 +1,3 @@
import pyarrow as pa
from sklearn.decomposition import PCA as PCAOPERATION

from DashAI.back.api.utils import create_random_state
@@ -179,4 +178,6 @@ def __init__(self, **kwargs):

def get_output_type(self, column_name: str = None) -> DashAIDataType:
"""Returns Float64 as the output type for PCA components."""
import pyarrow as pa

return Float(arrow_type=pa.float64())
3 changes: 2 additions & 1 deletion DashAI/back/converters/scikit_learn/polynomial_features.py
@@ -1,4 +1,3 @@
import pyarrow as pa
from sklearn.preprocessing import PolynomialFeatures as PolynomialFeaturesOperation

from DashAI.back.converters.category.polynomial_kernel import PolynomialKernelConverter
@@ -94,4 +93,6 @@ class PolynomialFeatures(

def get_output_type(self, column_name: str = None) -> DashAIDataType:
"""Returns Float64 as the output type for polynomial features."""
import pyarrow as pa

return Float(arrow_type=pa.float64())
3 changes: 2 additions & 1 deletion DashAI/back/converters/scikit_learn/rbf_sampler.py
@@ -1,4 +1,3 @@
import pyarrow as pa
from sklearn.kernel_approximation import RBFSampler as RBFSamplerOperation

from DashAI.back.api.utils import create_random_state
@@ -80,4 +79,6 @@ def __init__(self, **kwargs):

def get_output_type(self, column_name: str = None) -> DashAIDataType:
"""Returns Float64 as the output type for transformed data."""
import pyarrow as pa

return Float(arrow_type=pa.float64())
3 changes: 2 additions & 1 deletion DashAI/back/converters/scikit_learn/select_fdr.py
@@ -1,4 +1,3 @@
import pyarrow as pa
from sklearn.feature_selection import SelectFdr as SelectFdrOperation

from DashAI.back.converters.category.feature_selection import FeatureSelectionConverter
@@ -45,4 +44,6 @@ def __init__(self, **kwargs):

def get_output_type(self, column_name: str = None) -> DashAIDataType:
"""Returns Float64 as the output type for selected features."""
import pyarrow as pa

return Float(arrow_type=pa.float64())