Synchronous (Zero-Copy) Outlet Mode #256
Conversation
This is awesome, I will have to try it out! I have a relatively low sample rate, but I also have the additional burden of thread communication in my app: the pointer to the data struct could be sent directly instead of going through a locked, reusable buffer per stream. This should reduce the overhead considerably (e.g. 1/2 to 1/4 the number of ops), and the savings would actually scale with the number of consumers in my case, despite the slight increase in latency, since the data no longer needs to be copied for each consumer.

For reference, this is a general performance test I run on my Dart API wrapper. Although the numbers are slightly higher in the new version, I wouldn't take that as an indication of a performance regression; it is a useful point of reference to make sure performance won't be worse when I test the zero-copy mode.

Liblsl.dart performance test results, liblsl v1.16.2
Liblsl.dart performance test results, liblsl v1.17.5
1. User creates an outlet with the transp_sync_blocking flag:
lsl::stream_outlet outlet(info, 0, 360, transp_sync_blocking);
2. When a consumer connects, the socket is handed off from client_session to sync_write_handler after the feed header handshake (no transfer thread is spawned).
3. When push_sample() is called:
- Timestamp is encoded and stored in sync_timestamps_
- User's data buffer pointer is wrapped in asio::const_buffer (zero copy)
- If pushthrough=true, all buffers are written to all consumers via blocking gather-write
Commit notes (excerpts, truncated by the page view):
- …per consistent with stream_info and properly throws on construction failure.
- Fix have_consumers()/wait_for_consumers() to detect sync consumers
- Handle DEDUCED_TIMESTAMP in sync mode for proper chunk timing
- Change sync_timestamps_ to deque to prevent pointer invalidation
- Add optimized enqueue_chunk_sync() for batched chunk transfers
- Add have_sync_consumers() to tcp_server
- Add sync outlet tests and benchmark tool
- …in the namespace) 2. Added #include <algorithm> for std::sort
- …ync<std::string>, which resolves the Windows linker error
- Replaced C++17 structured bindings with .first/.second pair access for C++11/14 compatibility
This PR adds a new transp_sync_blocking transport flag that enables synchronous, zero-copy data transfer for stream outlets. Instead of copying sample data into an internal buffer for async delivery, sync mode writes directly from the user's buffer to connected sockets, eliminating memory allocation and copy overhead.
It is intended as a replacement for #170, which has not been updated in some time.
Motivation
For high-channel-count, high-sample-rate applications (e.g., 1000+ channels at 30 kHz), the async outlet's per-sample memory allocation and copying becomes a significant CPU bottleneck. Sync mode removes that overhead by writing each sample directly from the user's buffer to the connected sockets.
This makes all the difference for me on a lower-power embedded system.
Usage
Limitations
Benchmark Results
Test configuration: 1000 channels, 30kHz sample rate, macOS (Apple Silicon)
CPU Usage by Chunk Size (1 consumer)
Scaling with Multiple Consumers (chunk=4)
CPU savings remain significant (~50%) across consumer counts. However, push latency increases linearly with consumers in sync mode (async latency stays constant).
Implementation Details