
Deterministic Global Optimization over Trained Kolmogorov-Arnold Networks (KANs)

This repository contains the Pyomo files implementing the Mixed-Integer Nonlinear Programming (MINLP) formulation proposed in the paper Deterministic Global Optimization over trained Kolmogorov Arnold Networks.
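For background on what the MINLP must encode: each edge of a KAN applies a learned univariate function represented as a B-spline, so the formulation ultimately constrains piecewise-polynomial basis functions. Below is a minimal sketch of the Cox-de Boor recursion that defines those basis functions; it is illustrative only (the repository encodes the splines as Pyomo constraints, not via this function):

```python
def bspline_basis(i, k, t, x):
    """Value at x of the i-th B-spline basis function of degree k
    over the knot vector t (Cox-de Boor recursion)."""
    if k == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = right = 0.0
    if t[i + k] > t[i]:
        left = (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, t, x)
    if t[i + k + 1] > t[i + 1]:
        right = (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) * bspline_basis(i + 1, k - 1, t, x)
    return left + right

# Uniform knots; the cubic (degree-3) basis functions form a partition of
# unity on the interior interval [t[3], t[4]] = [3, 4].
knots = list(range(8))
total = sum(bspline_basis(i, 3, knots, 3.5) for i in range(4))
print(total)  # ≈ 1.0 (partition of unity)
```

Bounding each input to the knot range is what keeps these piecewise polynomials well-defined inside the optimization model, which is why input bounds appear throughout the instructions below.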

Additionally, this repository includes:

  • Python scripts to train Multilayer Perceptrons (MLPs) using TensorFlow.
  • Optimization routines for trained MLPs using OMLT.
  • Code and data necessary to reproduce all results presented in the paper.

Folder structure:

  • src contains all Pyomo files required to create a Pyomo model object of a trained KAN.
  • util contains all scripts required to reproduce the results in the paper, covering data generation and the training of the KAN and MLP models.
  • data contains all training and testing datasets used to train the models, as well as the JSON scaler files required for optimizing MLPs with OMLT.
  • models contains all KAN models in JSON format, which are required to instantiate a Pyomo model object, and all MLP models in Keras format.
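For orientation, a model file such as those in models/kan can be inspected with the standard library before a Pyomo model is built from it. The field names below are purely hypothetical placeholders, not the repository's actual schema:

```python
import json

# Hypothetical stand-in for a trained-KAN description; the real schema
# used by the files in models/kan/ may differ.
kan_description = {
    "width": [2, 1, 1],   # nodes per layer (2 inputs, 1 hidden node, 1 output)
    "grid": 3,            # spline grid intervals (cf. "G3" in the filenames)
    "spline_order": 3,    # cubic B-splines
}

# Round-trip through JSON, as one would when loading a saved model file.
text = json.dumps(kan_description)
model = json.loads(text)
print(model["width"], model["grid"])
```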

Training and Optimization

Optimizing Over a Trained KAN

To optimize over a trained KAN, run:

python -m opt_kan models/kan/peaks/Peaks_H1_N2_G3.json KAN_formulation_options.json scip

Arguments:

All three arguments shown in the example above are positional and must be supplied:

  • models/kan/peaks/Peaks_H1_N2_G3.json: Path to the trained KAN model.
  • KAN_formulation_options.json: Specifies optimization formulation. Refer to the paper for additional details.
  • scip: Optimization solver.

Important: Modify create_kan.py (in src/) to adjust input variable bounds based on the case study.

Optimizing Over a Trained MLP

To optimize over a trained MLP, run:

python -m opt_mlp --keras_model models/mlp/peaks/peaks_mlp_relu_1_16.keras --scaler_file data/peaks_scaler.json --formulation bigm --solver scip --num_inputs 2 --input_lb -3 --input_ub 3 --time_limit 7200

Key Arguments:

  • --keras_model: Path to the trained MLP model.
  • --scaler_file: Path to the JSON file for data scaling.
  • --formulation: MLP optimization formulation (e.g., bigm for the big-M reformulation).
  • --solver: Optimization solver.
  • --input_lb, --input_ub: Lower and upper bounds of inputs.
  • --time_limit: Maximum solver time (seconds).
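The bigm option refers to the standard big-M MILP encoding of a ReLU neuron y = max(0, w·x + b): a binary variable z and a sufficiently large constant M turn the piecewise-linear activation into four linear constraints. A minimal stdlib check of that encoding (the weights and M below are illustrative, not taken from the paper or from OMLT):

```python
def relu_bigm_feasible(x, w, b, y, z, M=100.0, tol=1e-9):
    """Check the four big-M constraints for one ReLU neuron y = max(0, w.x + b)."""
    pre = sum(wi * xi for wi, xi in zip(w, x)) + b  # pre-activation w.x + b
    return (
        y >= pre - tol                    # y >= w.x + b
        and y <= pre + M * (1 - z) + tol  # y <= w.x + b + M(1 - z)
        and y <= M * z + tol              # y <= M z
        and y >= -tol                     # y >= 0
    )

# For each input, the true ReLU output with the matching binary is feasible.
w, b = [1.0, -2.0], 0.5
for x in ([1.0, 0.0], [0.0, 1.0]):
    pre = sum(wi * xi for wi, xi in zip(w, x)) + b
    y = max(0.0, pre)
    z = 1 if pre > 0 else 0
    print(x, y, relu_bigm_feasible(x, w, b, y, z))  # prints True for both
```

The tightness of M matters in practice: valid input bounds (--input_lb, --input_ub) let smaller constants be used, which generally helps the solver.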

Reference

If you use the formulation from this paper, please cite it as follows.

@misc{karia2025deterministicglobaloptimizationtrained,
      title={Deterministic Global Optimization over trained Kolmogorov Arnold Networks}, 
      author={Tanuj Karia and Giacomo Lastrucci and Artur M. Schweidtmann},
      year={2025},
      eprint={2503.02807},
      archivePrefix={arXiv},
      primaryClass={math.OC},
      url={https://arxiv.org/abs/2503.02807}, 
}

Contributors

  • Tanuj Karia
  • Giacomo Lastrucci
  • Artur M. Schweidtmann

Copyright and license

This repository is published under the MIT license (see the LICENSE file).

Copyright (C) 2025 Artur Schweidtmann, Delft University of Technology.

Contact

📧 Contact

🌐 PI research

