# Differentiable Logic Gate Parametrizations

A PyTorch implementation comparing two parametrization approaches for differentiable logic gates:

- **Input-Wise Parametrization (IWP)** — 4 parameters per gate
- **Original Parametrization (OP)** — 16 parameters per gate

## Overview

Differentiable Logic Gate Networks learn Boolean logic functions in a continuous, differentiable manner. Each gate learns to approximate one of the 16 possible two-input Boolean functions (AND, OR, XOR, NAND, etc.).
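To make this concrete, here is a minimal sketch of how hard Boolean gates are relaxed to differentiable functions of probabilistic inputs (the function names are illustrative, not the repo's API; these are the standard product-based relaxations):

```python
import torch

# Continuous relaxations of Boolean gates: inputs p, q are probabilities
# in [0, 1] instead of bits. At the corners (p, q in {0, 1}) each
# relaxation reproduces the exact truth table, and all are differentiable.
def soft_and(p, q):
    return p * q

def soft_or(p, q):
    return p + q - p * q

def soft_xor(p, q):
    return p + q - 2 * p * q

p, q = torch.tensor(0.9), torch.tensor(0.8)
print(soft_and(p, q))  # tensor(0.7200)
```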
| Parametrization | Params/Gate | Description |
|---|---|---|
| IWP | 4 | Directly learns output probabilities for each input combination (00, 01, 10, 11) |
| OP | 16 | Learns a softmax distribution over all 16 Boolean functions, then computes effective coefficients |
## Project Structure

```
iwp/
├── bench.py                           # Benchmarking script (CIFAR-10)
├── models/
│   ├── InputWiseGateLayer.py          # IWP implementation
│   ├── OriginalParametrizationLayer.py  # OP implementation
│   └── thermometer_encode.py          # Binary encoding for continuous inputs
└── README.md
```
## Quick Start

```bash
pip install torch torchvision
python bench.py
```

This trains both parametrizations on CIFAR-10 and compares training speed.
## Usage

```python
from models.InputWiseGateLayer import InputWiseGateLayer
from models.OriginalParametrizationLayer import OriginalParametrizationLayer
from models.thermometer_encode import thermometer_encode

# Encode continuous inputs to binary
x_encoded = thermometer_encode(x, num_thresholds=15)

# Create a layer with 128 logic gates
iwp_layer = InputWiseGateLayer(num_gates=128)
op_layer = OriginalParametrizationLayer(num_gates=128)

# Forward pass expects shape [batch, num_gates * 2]
output = iwp_layer(x_encoded[:, :256])  # 128 gates × 2 inputs
```

## Thermometer Encoding

Converts continuous values in [0, 1] into binary vectors using threshold comparisons:
```
value = 0.7, k = 4 thresholds
thresholds = [0.2, 0.4, 0.6, 0.8]
output = [1, 1, 1, 0]   (value > threshold)
```
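The encoding above fits in a few lines of PyTorch. This is a sketch, not the repo's `thermometer_encode`; the evenly spaced threshold placement and the exact signature are assumptions:

```python
import torch

def thermometer_encode(x: torch.Tensor, num_thresholds: int = 15) -> torch.Tensor:
    """Thermometer-encode values in [0, 1]: each scalar becomes
    `num_thresholds` bits, where bit i is 1 iff the value exceeds
    the i-th evenly spaced threshold (sketch implementation)."""
    # k thresholds strictly inside (0, 1), e.g. k=4 -> [0.2, 0.4, 0.6, 0.8]
    thresholds = torch.linspace(0, 1, num_thresholds + 2)[1:-1]
    # Broadcast [..., 1] against [k] -> [..., k]
    return (x.unsqueeze(-1) > thresholds).float()

print(thermometer_encode(torch.tensor([0.7]), num_thresholds=4))
# tensor([[1., 1., 1., 0.]])
```

Unlike a plain binary code, the thermometer code preserves ordering: nearby input values differ in few bits, which keeps the downstream gates' job smooth.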
## Input-Wise Parametrization (IWP)

Each gate learns 4 weights (ω₀₀, ω₀₁, ω₁₀, ω₁₁), one output probability per input combination:

```
output = ω₀₀·(1-p)(1-q) + ω₀₁·(1-p)·q + ω₁₀·p·(1-q) + ω₁₁·p·q
```

A sinusoidal activation keeps each weight in [0, 1]: ω = 0.5 + 0.5·sin(raw_weight)
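The formula and activation above can be sketched as a small module (the class name, initialization, and `(p, q)` calling convention are assumptions; the repo's `InputWiseGateLayer` takes a single flattened input instead):

```python
import torch
import torch.nn as nn

class InputWiseGate(nn.Module):
    """IWP sketch: 4 raw weights per gate, squashed into [0, 1] by
    w = 0.5 + 0.5*sin(raw), then blended by the input probabilities."""

    def __init__(self, num_gates: int):
        super().__init__()
        # One raw weight per input combination: (w00, w01, w10, w11)
        self.raw = nn.Parameter(torch.randn(num_gates, 4))

    def forward(self, p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
        # p, q: [batch, num_gates] soft inputs in [0, 1]
        w = 0.5 + 0.5 * torch.sin(self.raw)    # [num_gates, 4], each in [0, 1]
        w00, w01, w10, w11 = w.unbind(dim=-1)  # broadcast over the batch
        return (w00 * (1 - p) * (1 - q) + w01 * (1 - p) * q
                + w10 * p * (1 - q) + w11 * p * q)

gate = InputWiseGate(num_gates=128)
out = gate(torch.rand(32, 128), torch.rand(32, 128))
print(out.shape)  # torch.Size([32, 128])
```

Because the four basis terms are non-negative and sum to 1, the output is a convex combination of the four weights and stays in [0, 1].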
## Original Parametrization (OP)

Learns 16 weights (one per Boolean function), applies softmax, then computes effective coefficients via a truth-table lookup:

```python
probs = softmax(weights)           # [num_gates, 16]
effective = probs @ truth_table.T  # [num_gates, 4]
```
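A sketch of this reduction (class and variable names are assumptions, not the repo's `OriginalParametrizationLayer`). Each of the 16 two-input Boolean functions is identified with its 4-bit output pattern over the inputs 00, 01, 10, 11; the truth table here is stored as [16, 4], so no transpose is needed:

```python
import torch
import torch.nn as nn

# Row f is Boolean function f's outputs on inputs (00, 01, 10, 11),
# read off from the bits of f itself.
TRUTH_TABLE = torch.tensor(
    [[(f >> i) & 1 for i in range(4)] for f in range(16)],
    dtype=torch.float32,
)  # [16, 4]

class OriginalParametrizationGate(nn.Module):
    """OP sketch: 16 logits per gate, softmaxed into a distribution
    over Boolean functions, then collapsed to 4 effective coefficients."""

    def __init__(self, num_gates: int):
        super().__init__()
        self.weights = nn.Parameter(torch.randn(num_gates, 16))

    def effective_coefficients(self) -> torch.Tensor:
        probs = torch.softmax(self.weights, dim=-1)  # [num_gates, 16]
        return probs @ TRUTH_TABLE                   # [num_gates, 4]

coeffs = OriginalParametrizationGate(num_gates=128).effective_coefficients()
print(coeffs.shape)  # torch.Size([128, 4])
```

The 4 effective coefficients play the same role as IWP's (ω₀₀, ω₀₁, ω₁₀, ω₁₁), so both parametrizations share the same blending step; OP just reaches them through a 16-way softmax.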
## Configuration

Edit bench.py to adjust:

```python
BATCH_SIZE = 1        # Batch size for training
LEARNING_RATE = 0.01  # SGD learning rate
MAX_STEPS = 5000      # Training iterations
DEVICE = "cpu"        # "cpu", "cuda", or "mps"
NUM_GATES = 128       # Number of logic gates
THRESHOLD_K = 15      # Thermometer encoding bits
```

## License

MIT