osciquant

Official Source Code for OsciQuant

This repository contains the official implementation of Oscillations Make Neural Networks Robust to Quantization.

For a detailed explanation of the methodology and results, refer to our paper:

Oscillations Make Neural Networks Robust to Quantization

Usage

Importing OsciQuant

from osciquant.regularization import OsciQuantLoss
from osciquant.quantizers import UniformQuantizer
from osciquant.handler import attach_weight_quantizers, toggle_quantization

Attaching Quantizers

attach_weight_quantizers(model=model, 
                         exclude_layers=[], 
                         quantizer=UniformQuantizer(bit_width), 
                         enabled=False)

Attaches quantizers to the specified layers of the model; the weights in these layers will be regularized. When enabled is set to True, the quantized weights are used during the forward pass. toggle_quantization flips the state of enabled.
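As a rough illustration of what a uniform weight quantizer computes, here is a minimal sketch of min-max uniform quantization. This is an assumption for illustration only; the repository's UniformQuantizer may differ in details (e.g. a symmetric range or per-channel scales).

```python
# Sketch of min-max uniform quantization (illustrative only; the repo's
# UniformQuantizer may use a different scheme).
def uniform_quantize(weights, bit_width):
    w_min, w_max = min(weights), max(weights)
    if w_max == w_min:
        return list(weights)  # degenerate range: nothing to snap
    # 2**bit_width levels => 2**bit_width - 1 steps between min and max
    scale = (w_max - w_min) / (2 ** bit_width - 1)
    return [w_min + round((w - w_min) / scale) * scale for w in weights]

# 2-bit grid over [0, 1]: levels at 0, 1/3, 2/3, 1
print(uniform_quantize([0.0, 0.3, 0.8, 1.0], bit_width=2))
```

Each weight is snapped to the nearest of 2^bit_width evenly spaced levels spanning the observed weight range.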

Training with Regularized Loss

regularized_loss = OsciQuantLoss(base_loss=criterion, 
                                 model=model,
                                 regularization_lambda=1, 
                                 regularization=True)

The OsciQuantLoss function adds a regularization term that induces oscillations in the weights, making them robust to quantization. To validate the model with quantized weights, call toggle_quantization before evaluating.

for epoch in range(epochs):
    toggle_quantization(model, enabled=False)
    ...
    train_loss = regularized_loss(pred_train, y_train)
    ...
    toggle_quantization(model, enabled=True)
    ...
    quantized_val_loss = criterion(pred_val, y_val)

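One plausible form for such a regularizer, sketched below purely for intuition, penalizes the distance between each weight and its nearest quantization grid point, so that gradient descent makes weights hover (and oscillate) around grid points. The names quantization_penalty and the integer-grid choice are illustrative assumptions; the exact term used by OsciQuantLoss is defined in the paper.

```python
# Hypothetical sketch of a quantization-distance penalty; the actual
# OsciQuantLoss regularizer is defined in the paper and may differ.
def quantization_penalty(weights, regularization_lambda=1.0):
    # Squared distance of each weight to the nearest integer grid point.
    return regularization_lambda * sum((w - round(w)) ** 2 for w in weights)

def total_loss(base_loss_value, weights, regularization_lambda=1.0):
    # Task loss plus the oscillation-inducing penalty.
    return base_loss_value + quantization_penalty(weights, regularization_lambda)

print(total_loss(0.5, [0.25, 1.1, -0.9]))
```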
Baseline Quantization Methods

To perform Post-Training Quantization (PTQ), attach the quantizers to the specified layers of the model as before, and quantize once after the model has converged.

from osciquant.quantizers import UniformQuantizer
from osciquant.handler import attach_weight_quantizers, toggle_quantization

BIT = 4  # desired bit width
EXCLUDE = []  # quantize all layers
QUANTIZER = UniformQuantizer(bit_width=BIT)

attach_weight_quantizers(model=model, 
                         exclude_layers=EXCLUDE, 
                         quantizer=QUANTIZER, 
                         enabled=False)

for epoch in range(epochs):
    ...
    ### Normal full-precision training
    ...

### Obtain PTQ model
toggle_quantization(model, enabled=True)

### Evaluate using the quantized model
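As a back-of-the-envelope check on why plain PTQ degrades at low bit widths: under the standard uniform-noise model (an approximation, not a result from this repository), round-to-nearest quantization with step size s contributes a mean squared error of about s**2 / 12, and s grows exponentially as the bit width shrinks.

```python
# Uniform-noise estimate of quantization error: with step size s, the
# expected squared error of round-to-nearest is approximately s**2 / 12.
def quant_mse_estimate(w_min, w_max, bit_width):
    scale = (w_max - w_min) / (2 ** bit_width - 1)  # grid step size
    return scale ** 2 / 12

# Error estimate for weights in [-0.5, 0.5] at decreasing bit widths.
for bit in (8, 4, 2):
    print(bit, quant_mse_estimate(-0.5, 0.5, bit))
```

Each bit removed roughly doubles the step size and quadruples the expected error, which is why 2-bit PTQ without any robustness-oriented training tends to fail.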

Todo

  • feature: function for attaching a unique quantizer object to each layer (LSQ)
  • feature: function for cross-bit accuracy evaluation
  • bug: if a layer is not used in the forward pass, it is still regularized when it has a quantizer attached

Citation

Kindly use the following BibTeX entry if you use the code in your work.

@article{wenshøj2025osciquant,
    title={Oscillations Make Neural Networks Robust to Quantization},
    author={Jonathan Wenshøj and Bob Pepin and Raghavendra Selvan},
    journal={Transactions on Machine Learning Research (TMLR)},
    year={2025}}

About

Official source code for "Oscillations Make Neural Networks Robust to Quantization". Wenshøj et al. 2025 https://openreview.net/forum?id=bPwcJ0nkDC
