Lower-precision floating-point arithmetic is becoming more common, moving beyond the usual IEEE 64-bit double-precision and 32-bit single-precision formats. Today, hardware accelerators and software simulations often use reduced-precision formats, such as 16-bit half-precision, which are popular in scientific computing and machine learning. These formats boost computational speed, reduce data transfer between memory and processors, and use less energy. These benefits are most important with large datasets or real-time applications.
Pychop brings these features to Python, inspired by MATLAB's well-known chop function by Nick Higham. This Python library lets you quickly and reliably convert single- or double-precision numbers into any low-bit-width format. It is flexible: you can set up custom floating-point formats by choosing the number of exponent and significand bits, or pick fixed-point or integer quantization. This gives you control to match numerical precision and range to your algorithm, simulation, or hardware needs. The library combines advanced features with ease of use. It includes many rounding modes, both deterministic and stochastic, and can handle denormal numbers as well as soft errors for accurate hardware emulation. It is built for speed, using vectorized operations for emulation, and integrates directly with NumPy arrays, PyTorch tensors, and JAX arrays, so you can quantize data within your current workflow without extra conversions or performance loss.
Pychop lets you emulate low-precision arithmetic in a regular high-precision environment, so you do not need special hardware. This makes it easy to study how quantization affects stability, convergence, accuracy, and efficiency on your laptop or server. Pychop works well for academic research needing careful control over numerics, and for software development where you want to quickly test different bit-widths to find the best balance between speed, memory use, and model quality. In short, Pychop offers a comprehensive solution.
The proper running environment of Pychop is Python 3, with the following dependencies: python > 3.8, numpy >= 1.7.3, pandas >= 2.0, torch, jax.
To install the current release via the PIP package manager, use:
pip install pychop

Alternatively, installing pychop from the conda-forge channel can be achieved by adding conda-forge to your channels with:
conda config --add channels conda-forge
conda config --set channel_priority strict
Once the conda-forge channel has been enabled, pychop can be installed with conda:
conda install pychop
or with mamba:
mamba install pychop
It is possible to list all of the versions of pychop available on your platform with conda:
conda search pychop --channel conda-forge
or with mamba:
mamba search pychop --channel conda-forge
The Pychop class offers several key advantages that make it a powerful tool for developers, researchers, and engineers working with numerical computations:
- Customizable Precision
- Multiple Rounding Modes
- Hardware-Independent Simulation
- Support for Denormal Numbers
- GPU Acceleration
- Reproducible Stochastic Rounding
- Ease of Integration
- Error Detection
- Soft error simulation
The supported floating point arithmetic formats include:
| format | description | bits |
|---|---|---|
| 'q43', 'fp8-e4m3' | NVIDIA quarter precision | 4 exponent bits, 3 significand bits |
| 'q52', 'fp8-e5m2' | NVIDIA quarter precision | 5 exponent bits, 2 significand bits |
| 'b', 'bfloat16' | bfloat16 | 8 exponent bits, 7 significand bits |
| 't', 'tf32' | TensorFloat-32 | 8 exponent bits, 10 significand bits |
| 'h', 'half', 'fp16' | IEEE half precision | 5 exponent bits, 10 significand bits |
| 's', 'single', 'fp32' | IEEE single precision | 8 exponent bits, 23 significand bits |
| 'd', 'double', 'fp64' | IEEE double precision | 11 exponent bits, 52 significand bits |
| 'c', 'custom' | custom format | user-defined |
Pychop supports arbitrary built-in reduced-precision types for scalars, arrays, and tensors; see here for detailed documentation. A simple scalar example is as follows:
from pychop import Chop
from pychop.builtin import CPFloat
half = Chop(exp_bits=5, sig_bits=10, subnormal=True, rmode=1)
a = CPFloat(1.234567, half)
b = CPFloat(0.987654, half)
print(a) # CPFloat(1.23438, prec=half)
c = a + b # stays a CPFloat, chopped
print(c) # CPFloat(2.22203, prec=half)
d = a * b / 2.0
print(d) # CPFloat(0.609863, prec=half)
# mixed with a normal Python float
e = a + 3.14
print(e) # CPFloat(4.37438, prec=half)

Microscaling (MX) formats use block-level shared exponents for extreme compression (2-4x vs. FP16).
| format | description | element bits | block structure |
|---|---|---|---|
| 'mxfp8_e5m2' | OCP MX FP8 E5M2 | 8 (5 exp + 2 sig) | 32 elements + E8M0 scale |
| 'mxfp8_e4m3' | OCP MX FP8 E4M3 | 8 (4 exp + 3 sig) | 32 elements + E8M0 scale |
| 'mxfp6_e3m2' | OCP MX FP6 E3M2 | 6 (3 exp + 2 sig) | 32 elements + E8M0 scale |
| 'mxfp6_e2m3' | OCP MX FP6 E2M3 | 6 (2 exp + 3 sig) | 32 elements + E8M0 scale |
| 'mxfp4_e2m1' | OCP MX FP4 E2M1 | 4 (2 exp + 1 sig) | 32 elements + E8M0 scale |
| custom MX | user-defined | any E/M combination | any block size |
Key Features of MX Formats:
- 🚀 2-4x compression vs FP16 while maintaining accuracy
- 🎯 Block-level shared scale factor (typically 32 elements per block)
- 🔧 Fully customizable: any (exp_bits, sig_bits) combination supported
- 📦 Flexible block sizes: 8, 16, 32, 64, 128, or custom
- ⚙️ Adjustable scale precision: E6M0, E8M0, E10M0, or custom
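To make the shared-scale idea concrete before turning to the pychop interface, here is a toy NumPy sketch (our own simplification for illustration, not pychop's implementation or the full OCP MX specification): each block shares a single power-of-two scale derived from its largest magnitude, and elements are rounded onto a coarse grid relative to that scale.

```python
import numpy as np

# Toy sketch of block-level shared-exponent scaling (our simplification,
# not pychop's implementation or the full OCP MX specification).
def block_scale_quantize(x, block_size=32, sig_bits=3):
    xb = x.reshape(-1, block_size)
    # One power-of-two scale per block (E8M0-style: exponent only, no mantissa),
    # chosen from the block's largest magnitude; assumes no all-zero blocks.
    scale = 2.0 ** np.floor(np.log2(np.abs(xb).max(axis=1, keepdims=True)))
    grid = 2.0 ** sig_bits
    q = np.round(xb / scale * grid) / grid   # low-precision elements
    return (q * scale).reshape(x.shape)

np.random.seed(0)
x = np.random.randn(64)
err = np.abs(x - block_scale_quantize(x)).max()
print(err)  # bounded by max|x| / 2**(sig_bits + 1)
```

With sig_bits=3, the worst-case element error in each block is bounded by the block maximum divided by 2**(sig_bits + 1), which is what makes the shared scale cheap yet accurate.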
from pychop.mx_formats import MXTensor, mx_quantize
# Predefined MX format
X_mx = mx_quantize(X, format='mxfp8_e4m3', block_size=32)
# Custom MX format (E5M4 elements)
mx_tensor = MXTensor(X, format=(5, 4), block_size=64)
# Custom with larger scale range
mx_tensor = MXTensor(X, format=(4, 3), scale_exp_bits=10, block_size=32)
# Ultra-low precision (3-bit elements!)
mx_tensor = MXTensor(X, format=(1, 1), block_size=16)

We will go through the main functionality of Pychop below; for details, refer to the documentation.
Users can specify the number of exponent (exp_bits) and significand (sig_bits) bits, enabling precise control over the trade-off between range and precision. For example, setting exp_bits=5 and sig_bits=4 creates a compact 10-bit format (1 sign, 5 exponent, 4 significand), ideal for testing minimal precision scenarios.
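As a back-of-the-envelope check of that trade-off, the dynamic range and precision implied by a given (exp_bits, sig_bits) choice can be computed directly. The helper below (format_params is our own hypothetical name, not a pychop function) assumes an IEEE-style exponent bias and an implicit leading significand bit:

```python
def format_params(exp_bits, sig_bits):
    # Largest exponent under an IEEE-style bias of 2**(exp_bits - 1) - 1
    emax = 2 ** (exp_bits - 1) - 1
    # Largest finite value: full significand (2 - 2**-sig_bits) at emax
    max_val = 2.0 ** emax * (2 - 2.0 ** -sig_bits)
    # Machine epsilon: spacing of representable values just above 1.0
    eps = 2.0 ** -sig_bits
    return max_val, eps

print(format_params(5, 10))  # IEEE half: (65504.0, 0.0009765625)
print(format_params(5, 4))   # compact 10-bit format: (63488.0, 0.0625)
```

So the compact 10-bit format keeps nearly the full range of fp16, but with 64x coarser relative precision.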
Rounding values with a specified precision format:
Pychop supports fast low-precision floating-point quantization and also enables GPU emulation (simply move the input to a GPU device), with different rounding functions:
import pychop
from pychop import Chop
import numpy as np
np.random.seed(0)
X = np.random.randn(5000, 5000)
pychop.backend('numpy', 1) # Specify different backends, e.g., jax and torch
# One can also specify 'auto'; pychop will then automatically detect the type,
# but speed will be degraded.
ch = Chop(exp_bits=5, sig_bits=10, rmode=3) # half precision
X_q = ch(X)
print(X_q[:10, 0])

If optimized performance is not required and broader emulation support is desired, one can use the following example.
Pychop also provides the same functionality as Higham's chop [1], including soft error simulation (by setting flip=True), at somewhat reduced speed:
from pychop import FaultChop
ch = FaultChop('h') # Standard IEEE 754 half precision
X_q = ch(X) # Rounding values

One can also customize the precision via:
from pychop import Customs
from pychop import FaultChop
pychop.backend('numpy', 1)
ct1 = Customs(exp_bits=5, sig_bits=10) # half precision (5 exponent bits, 10+(1) significand bits, where (1) is the implicit bit)
ch = FaultChop(customs=ct1, rmode=3) # Round towards minus infinity
X_q = ch(X)
print(X_q[:10, 0])
ct2 = Customs(emax=15, t=11)
ch = FaultChop(customs=ct2, rmode=3)
X_q = ch(X)
print(X_q[:10, 0])

To enable quantization-aware training, a sequential neural network can be built with the derived quantized layers (seamlessly integrated with the Straight-Through Estimator):
import torch.nn as nn
from pychop.layers import *
class MLP(nn.Module):
    def __init__(self, chop=None):
        super(MLP, self).__init__()
        self.flatten = nn.Flatten()
        self.fc1 = QuantizedLinear(256, 256, chop=chop)
        self.relu1 = nn.ReLU()
        self.dropout = nn.Dropout(0.2)
        self.fc2 = QuantizedLinear(256, 10, chop=chop)
        # e.g., 5 exponent bits, 10 explicit significand bits, round to nearest, ties to even

    def forward(self, x):
        x = self.flatten(x)
        x = self.fc1(x)
        x = self.relu1(x)
        x = self.dropout(x)
        x = self.fc2(x)
        return x

To enable quantization-aware training, one needs to pass the floating-point chopper ChopSTE or the fixed-point chopper ChopfSTE to the parameter chop; for detailed examples, we refer to example_CNN_ft.py and example_CNN_fp.py.
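The Straight-Through Estimator behind these layers can be sketched in plain NumPy (an illustration of the general technique under our own naming, not pychop's ChopSTE): quantize in the forward pass, but treat the quantizer as the identity when back-propagating, so rounding does not zero out the gradients.

```python
import numpy as np

# Forward: round weights onto a fixed-point grid (a non-differentiable step).
def ste_forward(w, fbits=4):
    scale = 2.0 ** fbits
    return np.round(w * scale) / scale

# Backward: pretend quantization was the identity and pass gradients through,
# instead of using its true (almost-everywhere-zero) derivative.
def ste_backward(grad_out):
    return grad_out

w = np.array([0.123, -0.456])
w_q = ste_forward(w)                          # quantized weights used in the forward pass
grad_w = ste_backward(np.array([1.0, 1.0]))   # upstream gradient, passed straight through
print(w_q)  # [ 0.125  -0.4375]
```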
For integer quantization, please see example_CNN_int.py.
Similar to floating-point quantization, one can set the corresponding backend. The dominant parameters are ibits and fbits, the bit-widths of the integer part and the fractional part, respectively.
pychop.backend('numpy')
from pychop import Chopf
ch = Chopf(ibits=4, fbits=4)
X_q = ch(X)

Code examples can be found in guidance1 and guidance2.
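Conceptually, this fixed-point rounding maps each value onto a Q(ibits, fbits) grid. A minimal NumPy sketch of that mapping (our own illustration, assuming one sign bit counted within ibits and round-to-nearest; not Chopf's internals):

```python
import numpy as np

def fixed_point_quantize(x, ibits=4, fbits=4):
    scale = 2.0 ** fbits                       # grid step is 2**-fbits
    lo = -(2.0 ** (ibits - 1))                 # most negative representable value
    hi = 2.0 ** (ibits - 1) - 2.0 ** -fbits    # most positive representable value
    return np.clip(np.round(x * scale) / scale, lo, hi)

x = np.array([0.1, -0.2, 3.14, 100.0])
print(fixed_point_quantize(x))  # [ 0.125  -0.1875  3.125   7.9375] (100.0 saturates)
```

Values outside the representable range saturate at the grid boundaries, which is why choosing ibits large enough for the data's dynamic range matters as much as fbits.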
Integer quantization is another important feature of pychop. Its intention is to convert floating-point numbers into low bit-width integers, which speeds up computation on certain hardware. It performs quantization with user-defined bit-widths.
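The arithmetic behind such a conversion can be sketched in NumPy (our own illustration of the standard asymmetric scale/zero-point scheme, not Chopi's internals): the observed [min, max] range is mapped linearly onto the signed integer grid.

```python
import numpy as np

# Sketch of asymmetric integer quantization (an illustration of the standard
# scale/zero-point scheme, not pychop's internals).
def quantize_asymmetric(x, bits=8):
    qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    scale = (x.max() - x.min()) / (qmax - qmin)      # real units per integer step
    zero_point = int(round(qmin - x.min() / scale))  # integer that maps to 0.0
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

X = np.array([[0.1, -0.2], [0.3, 0.4]])
q, scale, zp = quantize_asymmetric(X)
X_dq = (q.astype(np.float64) - zp) * scale           # dequantize
print(np.abs(X - X_dq).max())                        # round-trip error <= scale/2
```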
The integer arithmetic emulation of Pychop is implemented by the interface Chopi. It can be used in many circumstances and offers flexible options, such as symmetric or asymmetric quantization and the number of bits to use. Its usage is illustrated below:
import numpy as np
import pychop
from pychop import Chopi
pychop.backend('numpy')
X = np.array([[0.1, -0.2], [0.3, 0.4]])
ch = Chopi(bits=8, symmetric=False)
X_q = ch.quantize(X) # Convert to integers
X_dq = ch.dequantize(X_q) # Convert back to floating point

If you use a Python virtual environment in MATLAB, ensure MATLAB detects it:
pe = pyenv('Version', 'your_env\python.exe'); % or simply pe = pyenv();

To use Pychop in your MATLAB environment, simply load the pychop module:
pc = py.importlib.import_module('pychop');
ch = pc.Chop(exp_bits=5, sig_bits=10, rmode=1)
X = rand(100, 100);
X_q = ch(X);

Or, more specifically, use
np = py.importlib.import_module('numpy');
pc = py.importlib.import_module('pychop');
ch = pc.Chop(exp_bits=5, sig_bits=10, rmode=1)
X = np.random.randn(int32(100), int32(100));
X_q = ch(X);

- Machine Learning: Test the impact of low-precision arithmetic on model accuracy and training stability, especially for resource-constrained environments like edge devices.
- Hardware Design: Simulate custom floating-point units before hardware implementation, optimizing bit allocations for specific applications.
- Numerical Analysis: Investigate quantization errors and numerical stability in scientific computations.
- Education: Teach concepts of floating-point representation, rounding, and denormal numbers with a hands-on, customizable tool.
Our software is licensed under the MIT License. We welcome contributions in any form; assistance with documentation is always welcome. To contribute, feel free to open an issue, or fork the project, make your changes, and submit a pull request. We will do our best to work through any issues and requests.
This project is supported by the European Union (ERC, InEXASCALE, 101075632). Views and opinions expressed are those of the authors only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.
If you use Pychop in your research or simulations, cite:
@misc{carson2025,
title={Pychop: Emulating Low-Precision Arithmetic in Numerical Methods and Neural Networks},
author={Erin Carson and Xinye Chen},
year={2025},
eprint={2504.07835},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2504.07835},
}

[1] Nicholas J. Higham and Srikara Pranesh, Simulating Low Precision Floating-Point Arithmetic, SIAM J. Sci. Comput., 2019.
[2] IEEE Standard for Floating-Point Arithmetic, IEEE Std 754-2019 (revision of IEEE Std 754-2008), IEEE, 2019.
[3] Intel Corporation, BFLOAT16---Hardware Numerics Definition, 2018.
[4] Jean-Michel Muller et al., Handbook of Floating-Point Arithmetic, Springer, 2018.
