This project contains the prototype system described in:
Junliang Hu, Zhisheng Hu, Chun-Feng Wu, and Ming-Chang Yang. 2025. Demeter: A Scalable and Elastic Tiered Memory Solution for Virtualized Cloud via Guest Delegation. In ACM SIGOPS 31st Symposium on Operating Systems Principles (SOSP '25), October 13–16, 2025, Seoul, Republic of Korea. ACM, New York, NY, USA, 17 pages. https://doi.org/10.1145/3731569.3764801
Demeter is a tiered memory management solution designed for virtualized environments that builds on modified versions of the Linux kernel and Cloud Hypervisor. This artifact allows evaluators to reproduce the key performance claims from our SOSP'25 paper.
Key Claims to Validate:
- Demeter improves performance by up to 2× over the existing hypervisor-based approach (TPP-H)
- Demeter achieves 28% average improvement (geometric mean) compared to the next best guest-based alternative (TPP)
Evaluation Time: ~31 hours total excluding environment preparation (26 hours for guest-delegated + 5 hours for hypervisor-based experiments)
Hardware Requirements:
- Dual socket Intel Ice Lake server with:
- At least 36 physical cores per CPU package
- At least 128GiB DDR4 DRAM paired with 512GiB Intel Optane PMEM 200 series per socket
- At least 1TiB available space on NVMe SSD
Software Requirements:
- Latest Clear Linux OS (version ≥43760) with development bundles: `sudo swupd bundle-add os-clr-on-clr dev-utils`
- Python 3.13 environment (setup instructions provided below)
Important Notes:
- This evaluation requires specific hardware (Intel Ice Lake + Optane PMEM) and cannot be run on other configurations
- We strongly recommend using `tmux` to prevent interruption during long-running experiments
- The artifact requires ~31 hours of continuous execution time
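If `tmux` is unavailable on the host, `nohup` offers similar protection against dropped SSH sessions. The sketch below is illustrative only: the echo/sleep pipeline and the `/tmp/demeter_run.log` path stand in for the real multi-hour experiment command.

```shell
# Fallback for terminals without tmux: detach the long run with nohup.
# "sleep 1" stands in for the real multi-hour experiment command.
nohup sh -c 'echo "experiment started"; sleep 1; echo "experiment finished"' \
    > /tmp/demeter_run.log 2>&1 &
wait                       # in practice: log out and check the log later
cat /tmp/demeter_run.log   # both messages survive the (simulated) detach
```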
Set up the Python environment:

```shell
python3 -m venv py313
source py313/bin/activate
pip install fire drgn jq pandas "altair[all]" poetry pydantic fabric
# Fix Python 3.13 compatibility issue (the stdlib pipes module was removed in 3.13)
sed -i 's/pipes/shlex/g' py313/lib/python3.13/site-packages/fire/trace.py
sed -i 's/pipes/shlex/g' py313/lib/python3.13/site-packages/fire/core.py
```

```shell
make -f toolchain.mk
```

This installs reproducible compilation infrastructure (LLVM/Clang and Rust toolchains).
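The `pipes`→`shlex` substitution can be rehearsed on a scratch file before touching the real site-packages files; the sample file contents below are hypothetical, purely to exercise the sed expression:

```shell
# Rehearse the compatibility patch on a throwaway file before touching site-packages.
sample=$(mktemp)
printf 'import pipes\nreturn pipes.quote(arg)\n' > "$sample"
sed -i 's/pipes/shlex/g' "$sample"
grep -c shlex "$sample"   # → 2 (both occurrences rewritten)
rm -f "$sample"
```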
```shell
make -f kernel.mk build
```

Builds all required kernels (Demeter and baseline implementations). Uses ccache for faster subsequent builds.
Expected Output:
- Source trees in `kernel/` directory
- Built kernels in `build/` directory
- Key kernels: `demeter`, `sota` (containing TPP, Nomad, and Memtis variants as symlinks)
```shell
make -f hypervisor.mk build
```

Builds the modified Cloud Hypervisor with Demeter patches.
```shell
make -f workload.mk
```

Compiles all seven evaluation workloads and downloads required datasets.
```shell
make -f bin.mk
make -f workload.mk install
```

Collects all required binaries for the benchmark framework.
```shell
make -f kernel.mk install-host
```

Verify Installation:

```shell
sudo bootctl list
```

Confirm you see entries for both:
- `linux-6.10.0-demeterhost.conf` (for guest-delegated experiments)
- `linux-5.15.162-tpphost.conf` (for hypervisor-based experiments)
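A scripted version of this check makes the verification less error-prone. This is a sketch, not part of the artifact: `check_entries` is a hypothetical helper that takes captured `bootctl list` output and reports any missing entry.

```shell
# Sketch: fail loudly if either expected boot entry is missing.
check_entries() {
    out="$1"; missing=0
    for e in linux-6.10.0-demeterhost.conf linux-5.15.162-tpphost.conf; do
        printf '%s\n' "$out" | grep -q "$e" || { echo "MISSING $e"; missing=1; }
    done
    [ "$missing" -eq 0 ] && echo "both boot entries present"
}
# Demo on sample text; on the real machine use: check_entries "$(sudo bootctl list)"
check_entries 'linux-6.10.0-demeterhost.conf
linux-5.15.162-tpphost.conf'   # → both boot entries present
```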
Switch Kernels as Needed:
```shell
# For guest-delegated experiments (Demeter, TPP, Nomad, Memtis)
sudo bootctl set-oneshot linux-6.10.0-demeterhost.conf
sudo reboot

# For hypervisor-based experiments (TPP-H)
sudo bootctl set-oneshot linux-5.15.162-tpphost.conf
sudo reboot
```

Environment Setup (rerun after every reboot):

```shell
# Kill zombie processes (pkill matches the kernel's 15-character comm name,
# so "cloud-hyperviso" deliberately drops the trailing "r" of cloud-hypervisor)
echo "cloud-hyperviso virtiofsd pcm-memory gdb" | xargs -n1 sudo pkill -9
# System configuration
ulimit -n 65535                       # Increase file limits
sudo swapoff --all                    # Disable swap
sudo sysctl -w vm.overcommit_memory=1 # Enable memory overcommit
echo 3 | sudo tee /proc/sys/vm/drop_caches >/dev/null # Clear page cache
# Lock CPU frequency
echo 3000000 | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_{min,max}_freq >/dev/null
# Configure PMEM as NUMA nodes
sudo daxctl reconfigure-device --human --mode=system-ram all || true
# Setup VM networking
sudo script/network.bash --restart
```

Before running full experiments, verify that the setup works:
```shell
# Activate Python environment
source py313/bin/activate
export PATH="$(realpath bin):$PATH"
# Make sure the PMEM is configured as a NUMA node
sudo numactl --hardware
# Make sure there is a network bridge named virbr921
sudo ip a
# Test with 3 VMs
poetry -C bench install
poetry -C bench run python3 -m bench \
    --num 3 --kernel demeter --mem 17179869184 \
    --dram-ratio 0.2 --dram_node 0 --pmem_node 2 \
    run 'echo "Hello, Demeter!" | sudo tee /out/hello.log'
```

Expected: Logs appear in a timestamped folder under `bench/archive/`, containing the "Hello, Demeter!" message in `hello.log`.
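Because the archive folder names begin with ISO-8601 timestamps, the newest run can be picked lexicographically. The sketch below demonstrates this on a throwaway directory; for real runs, point it at `bench/archive/` instead (the `-demo` folder names are hypothetical).

```shell
# Sketch: select the newest timestamped run folder.
ARCHIVE=$(mktemp -d)   # stand-in for bench/archive
mkdir -p "$ARCHIVE/2025-01-01T00:00:00-demo" "$ARCHIVE/2025-06-01T12:00:00-demo"
latest=$(ls -1d "$ARCHIVE"/*/ | sort | tail -n1)
basename "$latest"     # ISO timestamps sort lexicographically => newest last
```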
Prerequisites:
- Current kernel: `6.10.0-demeterhost` (verify with `uname -r`)
- System configured with the environment setup script
```shell
# Ensure correct kernel
if [[ $(uname -r) != "6.10.0-demeterhost" ]]; then
    echo "Wrong kernel! Switch to the demeterhost kernel and reboot."
    exit 1
fi
# Run environment setup again
# Start evaluation
poetry -C bench run pytest tests/evaluation.py -k test_delegated_realworld_workloads
```

What This Tests:
- Demeter vs. TPP, Nomad, and Memtis on all 7 workloads
- ~26 hours total runtime
- Progress visible in terminal output
Prerequisites:
- Current kernel: `5.15.162-tpphost` (verify with `uname -r`)
- A clean environment is recommended, i.e., run this part right after a fresh reboot
```shell
# Switch kernel if needed
if [[ $(uname -r) != "5.15.162-tpphost" ]]; then
    sudo bootctl set-oneshot linux-5.15.162-tpphost.conf
    sudo reboot
fi
# Run environment setup again
# Enable TPP tiering
sudo python3 script/tpp.py echo
# Start evaluation
poetry -C bench run pytest tests/evaluation.py -k test_hypervisor_realworld_workloads
```

What This Tests:
- TPP-H (hypervisor-based) baseline on all 7 workloads
- ~5 hours total runtime
After both experiment phases complete:
```shell
# Find your log directories
ls -la bench/archive/
# Run analysis (replace paths with your actual log directories)
python3 script/plot.py \
    --guest_log_dir bench/archive/YYYY-MM-DDTHH:MM:SS-test_delegated_realworld_workloads/ \
    --host_log_dir bench/archive/YYYY-MM-DDTHH:MM:SS-test_hypervisor_realworld_workloads/
```

Expected Output:
- A `chart.svg` file with the performance comparison
- Results should show:
  - Up to 2× improvement over TPP-H (hypervisor-based)
  - 28% average improvement over TPP (guest-based); calculate the geometric mean manually for vmnum=9
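For the manual geometric-mean step, an awk one-liner suffices. The speedup values below are illustrative placeholders, not measured results; substitute the per-workload speedups from your own run.

```shell
# Geometric mean = exp(mean of logs); feed it one speedup per line.
printf '%s\n' 1.10 1.35 1.20 1.40 1.15 1.50 1.28 |
    awk '{ s += log($1); n++ } END { printf "geomean: %.3f\n", exp(s / n) }'
# → geomean: 1.276
```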
The evaluation includes seven real-world applications:
- silo: In-memory OLTP database (transaction processing)
- btree: High-performance B-tree index engine (data structures)
- graph500: Graph processing benchmark (graph analytics)
- pagerank: Twitter social network analysis
- liblinear: Machine learning with KDD CUP 2010 dataset (ML training)
- bwaves: Blast wave scientific simulation (HPC)
- xsbench: Nuclear reactor physics simulation (scientific computing)
- Kernel Boot Issues: Verify boot entries with `sudo bootctl list` and ensure memory mapping parameters are correct
- VM Startup Failures: Check that networking is properly configured and no zombie processes remain
- Out of Memory: Ensure swap is disabled and memory overcommit is enabled
- Missing Dependencies: Ensure all compilation steps completed successfully and binaries exist
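The leftover-process symptom can be checked before a rerun. This sketch reuses the same process names as the environment setup script and is read-only (no root needed):

```shell
# Report any leftover processes from a previous run.
for p in cloud-hyperviso virtiofsd pcm-memory gdb; do
    pgrep -x "$p" >/dev/null 2>&1 && echo "still running: $p"
done
echo "zombie check done"
```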
sosp25demeter:
type: article
date: 2025
author: [Junliang Hu, Zhisheng Hu, Chun-Feng Wu, Ming-Chang Yang]
title: "Demeter: A Scalable and Elastic Tiered Memory Solution for Virtualized Cloud via Guest Delegation"
serial-number:
doi: 10.1145/3731569.3764801
isbn: 979-8-4007-1870-0/25/10
parent:
type: proceedings
title: "31st Symposium on Operating Systems Principles (SOSP '25)"
date: 2025-10
location: Seoul, Republic of Korea
publisher: ACM, New York, NY, USA
tags: ["proceedings"]

@inproceedings{sosp25demeter,
address = {New York, NY, USA},
author = {Hu, Junliang and Hu, Zhisheng and Wu, Chun-Feng and Yang, Ming-Chang},
booktitle = {Proceedings of the 31st Symposium on Operating Systems Principles},
doi = {10.1145/3731569.3764801},
isbn = {9798400718700},
keywords = {Virtual Machine, Virtual Memory, Operating System, Compute Express Link, Tiered Memory},
numpages = {17},
publisher = {Association for Computing Machinery},
series = {SOSP '25},
title = {Demeter: A Scalable and Elastic Tiered Memory Solution for Virtualized Cloud via Guest Delegation},
url = {https://doi.org/10.1145/3731569.3764801},
venue = {Seoul, Republic of Korea},
year = {2025}
}
├── patch/ # Kernel and hypervisor patches
├── workload/ # Workload source code
├── bench/ # Pytest-based benchmark framework
├── script/ # Data processing and visualization
├── config/ # Kernel configuration files
├── toolchain/ # Toolchains for compilation (after make -f toolchain.mk)
├── kernel/ # Kernel source trees (after make -f kernel.mk)
├── hypervisor/ # Hypervisor source trees (after make -f hypervisor.mk)
├── build/ # Built kernels (after make -f kernel.mk build)
├── bin/ # Binary assets for VMs (after make -f bin.mk)
└── *.mk # Build infrastructure makefiles