Differentiable Logic Gate Networks: Parametrization Comparison

A PyTorch implementation comparing two parametrization approaches for differentiable logic gates:

  • Input-Wise Parametrization (IWP) — 4 parameters per gate
  • Original Parametrization (OP) — 16 parameters per gate

Overview

Differentiable Logic Gate Networks learn Boolean logic functions by relaxing discrete gates into continuous, trainable ones. Each gate learns to approximate one of the 16 possible two-input Boolean functions (AND, OR, XOR, NAND, etc.).

Parametrization   Params/Gate   Description
IWP               4             Directly learns output probabilities for each input combination (00, 01, 10, 11)
OP                16            Learns a softmax distribution over all 16 Boolean functions, then computes effective coefficients

Project Structure

iwp/
├── bench.py                 # Benchmarking script (CIFAR-10)
├── models/
│   ├── InputWiseGateLayer.py         # IWP implementation
│   ├── OriginalParametrizationLayer.py  # OP implementation
│   └── thermometer_encode.py         # Binary encoding for continuous inputs
└── README.md

Installation

pip install torch torchvision

Usage

Run Benchmark

python bench.py

This trains both parametrizations on CIFAR-10 and compares training speed.

Use in Your Own Model

from models.InputWiseGateLayer import InputWiseGateLayer
from models.OriginalParametrizationLayer import OriginalParametrizationLayer
from models.thermometer_encode import thermometer_encode

# Encode continuous inputs to binary
x_encoded = thermometer_encode(x, num_thresholds=15)

# Create a layer with 128 logic gates
iwp_layer = InputWiseGateLayer(num_gates=128)
op_layer = OriginalParametrizationLayer(num_gates=128)

# Forward pass expects shape [batch, num_gates * 2]
output = iwp_layer(x_encoded[:, :256])  # 128 gates × 2 inputs
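# The OP layer exposes the same interface; each layer returns one soft
# output per gate, i.e. shape [batch, 128] here
op_output = op_layer(x_encoded[:, :256])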

How It Works

Thermometer Encoding

Converts continuous values in [0, 1] into binary vectors via threshold comparisons:

value = 0.7, k = 4 thresholds
thresholds = [0.2, 0.4, 0.6, 0.8]
output = [1, 1, 1, 0]  (value > threshold)
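
A minimal sketch of this encoding in PyTorch, assuming inputs in [0, 1], evenly spaced thresholds, and per-feature bits flattened into one vector; the repository's thermometer_encode.py may differ in details:

import torch

def thermometer_encode(x: torch.Tensor, num_thresholds: int = 15) -> torch.Tensor:
    """Encode values in [0, 1] as num_thresholds binary features each."""
    # Evenly spaced interior thresholds, e.g. k = 4 -> [0.2, 0.4, 0.6, 0.8]
    thresholds = torch.linspace(0.0, 1.0, num_thresholds + 2)[1:-1]
    bits = (x.unsqueeze(-1) > thresholds).float()  # [batch, features, num_thresholds]
    return bits.flatten(start_dim=1)               # [batch, features * num_thresholds]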

Input-Wise Parametrization

Each gate learns 4 weights (ω₀₀, ω₀₁, ω₁₀, ω₁₁), each giving the output probability for one input combination:

output = ω₀₀·(1-p)(1-q) + ω₀₁·(1-p)q + ω₁₀·p(1-q) + ω₁₁·pq

Uses sinusoidal activation: ω = 0.5 + 0.5·sin(raw_weight)
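
A minimal sketch of a gate layer under this parametrization, assuming inputs arrive as interleaved (p, q) pairs; the repository's InputWiseGateLayer may differ in details:

import torch
import torch.nn as nn

class InputWiseGate(nn.Module):
    def __init__(self, num_gates: int):
        super().__init__()
        # 4 raw weights per gate, one per input combination (00, 01, 10, 11)
        self.raw = nn.Parameter(torch.randn(num_gates, 4))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [batch, num_gates * 2] -> per-gate input pairs p, q
        p, q = x.view(x.shape[0], -1, 2).unbind(-1)
        # Sinusoidal activation keeps each coefficient in [0, 1]
        w = 0.5 + 0.5 * torch.sin(self.raw)
        return (w[:, 0] * (1 - p) * (1 - q)
                + w[:, 1] * (1 - p) * q
                + w[:, 2] * p * (1 - q)
                + w[:, 3] * p * q)

Setting ω₀₀ = ω₀₁ = ω₁₀ = 0 and ω₁₁ = 1 reduces the expression to p·q, i.e. a hard AND gate.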

Original Parametrization

Learns 16 weights (one per Boolean function), applies softmax, then computes effective coefficients via truth table lookup:

probs = softmax(weights)           # [num_gates, 16]
effective = probs @ truth_table.T  # [num_gates, 4]
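
A minimal sketch of this computation, assuming a [4, 16] truth table whose columns enumerate the 16 two-input Boolean functions; how OriginalParametrizationLayer actually stores the table may differ:

import torch
import torch.nn as nn

# Column j holds function j's outputs on the inputs (00, 01, 10, 11)
TRUTH_TABLE = torch.tensor(
    [[(j >> i) & 1 for j in range(16)] for i in range(4)], dtype=torch.float32
)  # [4, 16]

class OriginalParametrizationGate(nn.Module):
    def __init__(self, num_gates: int):
        super().__init__()
        self.weights = nn.Parameter(torch.randn(num_gates, 16))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        probs = torch.softmax(self.weights, dim=-1)  # [num_gates, 16]
        effective = probs @ TRUTH_TABLE.T            # [num_gates, 4]
        p, q = x.view(x.shape[0], -1, 2).unbind(-1)  # each [batch, num_gates]
        basis = torch.stack(                         # [batch, num_gates, 4]
            [(1 - p) * (1 - q), (1 - p) * q, p * (1 - q), p * q], dim=-1)
        return (basis * effective).sum(-1)           # [batch, num_gates]

The effective coefficients play the same role as (ω₀₀, ω₀₁, ω₁₀, ω₁₁) in IWP, but are constrained to convex combinations of the 16 hard gates.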

Configuration

Edit bench.py to adjust:

BATCH_SIZE = 1          # Batch size for training
LEARNING_RATE = 0.01    # SGD learning rate
MAX_STEPS = 5000        # Training iterations
DEVICE = "cpu"          # "cpu", "cuda", or "mps"
NUM_GATES = 128         # Number of logic gates
THRESHOLD_K = 15        # Thermometer encoding bits
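
As a rough illustration of how these constants fit together, a hypothetical batch-one training step (the actual loop in bench.py, including the CIFAR-10 pipeline and model, may differ):

import torch
from torch import nn, optim
from models.InputWiseGateLayer import InputWiseGateLayer

BATCH_SIZE, LEARNING_RATE, MAX_STEPS = 1, 0.01, 5000
DEVICE, NUM_GATES = "cpu", 128

layer = InputWiseGateLayer(num_gates=NUM_GATES).to(DEVICE)
head = nn.Linear(NUM_GATES, 10).to(DEVICE)  # hypothetical readout for 10 classes
opt = optim.SGD(list(layer.parameters()) + list(head.parameters()), lr=LEARNING_RATE)

for step in range(MAX_STEPS):
    # Stand-in data: random bits and labels (bench.py uses thermometer-encoded CIFAR-10)
    x = torch.randint(0, 2, (BATCH_SIZE, NUM_GATES * 2), device=DEVICE).float()
    y = torch.randint(0, 10, (BATCH_SIZE,), device=DEVICE)
    loss = nn.functional.cross_entropy(head(layer(x)), y)
    opt.zero_grad()
    loss.backward()
    opt.step()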

License

MIT
