A hands-on exploration of Apache Kafka fundamentals using Java, demonstrating producer-consumer patterns, message streaming, and event-driven architecture concepts.
- Overview
- Features
- Tech Stack
- Project Structure
- Getting Started
- Usage Examples
- Key Concepts Explored
- Kafka UI
- Learning Outcomes
## Overview

This R&D project explores Apache Kafka's core capabilities through practical Java implementations. It demonstrates fundamental messaging patterns, including:
- Producer-Consumer Architecture: Basic message publishing and consumption
- Keyed Messages: Understanding partitioning and message ordering
- Asynchronous Processing: Callback-based message handling
- Containerized Infrastructure: Docker-based Kafka cluster setup
Perfect for developers learning distributed messaging systems or exploring event-driven architecture patterns.
## Features

- ✅ Basic Producer - Simple message publishing without keys
- ✅ Keyed Producer - Partition-aware message routing using keys
- ✅ Async Callbacks - Non-blocking message delivery with acknowledgments
- ✅ Batch Processing - Multiple message production in loops
- ✅ Continuous Polling - Real-time message consumption
- ✅ Consumer Groups - Scalable message processing
- ✅ Offset Tracking - Message position monitoring
- ✅ Partition Awareness - Understanding message distribution
- ✅ Docker Compose Setup - One-command Kafka cluster deployment
- ✅ Kafka UI Dashboard - Visual monitoring and management
- ✅ Zookeeper Integration - Cluster coordination
## Tech Stack

| Technology | Version | Purpose |
|---|---|---|
| Java | 17 | Core programming language |
| Apache Kafka | 4.0.0 | Distributed streaming platform |
| Maven | - | Dependency management & build tool |
| Docker | - | Containerization |
| Kafka UI | Latest | Web-based Kafka management |
| SLF4J | 2.0.17 | Logging framework |
## Project Structure

```
apche/
├── src/
│   └── main/
│       └── java/
│           ├── Producer/
│           │   ├── ProducerInIt.java     # Basic producer implementation
│           │   └── ProducerWithKey.java  # Keyed message producer
│           ├── Consumer/
│           │   └── ConsumerInIt.java     # Consumer implementation
│           └── com/apche/
│               └── Main.java             # Entry point
├── compose.yaml                          # Docker Compose configuration
├── pom.xml                               # Maven dependencies
└── README.md                             # This file
```
## Getting Started

### Prerequisites

- Java 17 or higher
- Maven 3.6+
- Docker & Docker Compose

### Setup

1. Clone the repository

   ```bash
   git clone <repository-url>
   cd apche
   ```

2. Start Kafka infrastructure

   ```bash
   docker compose up -d
   ```

   This starts:
   - Zookeeper (port 2181)
   - Kafka broker (ports 9092, 29092)
   - Kafka UI (port 8080)

3. Verify services are running

   ```bash
   docker compose ps
   ```

4. Build the project

   ```bash
   mvn clean install
   ```
## Usage Examples

### Basic Producer

Sends 10 messages to `Test_Topic` without keys:

```bash
mvn exec:java -Dexec.mainClass="Producer.ProducerInIt"
```

Expected Output:

```
✅ Sent to topic Test_Topic partition 0 offset 42
```
### Keyed Producer

Sends 10 keyed messages (ensures same key → same partition):

```bash
mvn exec:java -Dexec.mainClass="Producer.ProducerWithKey"
```

Key Concept: Messages with the same key always go to the same partition, maintaining order.
### Consumer

Continuously polls and displays messages from `Test_Topic`:

```bash
mvn exec:java -Dexec.mainClass="Consumer.ConsumerInIt"
```

Expected Output:

```
KEY:ID_5, VALUE:VAL_5
Partitions:1, Offset:23
```
## Key Concepts Explored

### Basic Producer (No Key)

```java
ProducerRecord<String, String> record =
        new ProducerRecord<>("Test_Topic", "hello world");
```

- Messages distributed across partitions in round-robin fashion
- No ordering guarantees
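The round-robin idea can be modeled with a simple counter. This is only a sketch of the concept, not Kafka's internal partitioner (newer producer versions actually use sticky partitioning for unkeyed records, batching several messages to one partition before moving on):

```java
// Toy model of round-robin partition assignment for unkeyed messages.
// Illustrates the concept only; Kafka's real partitioner lives in the client library.
public class RoundRobinSketch {
    private final int numPartitions;
    private int counter = 0;

    public RoundRobinSketch(int numPartitions) {
        this.numPartitions = numPartitions;
    }

    // Each unkeyed message goes to the next partition in turn.
    public int nextPartition() {
        return counter++ % numPartitions;
    }

    public static void main(String[] args) {
        RoundRobinSketch sketch = new RoundRobinSketch(3);
        for (int i = 0; i < 6; i++) {
            // Partitions cycle 0, 1, 2, 0, 1, 2 - so no per-message ordering guarantee.
            System.out.println("Message " + i + " -> partition " + sketch.nextPartition());
        }
    }
}
```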
### Keyed Producer

```java
ProducerRecord<String, String> record =
        new ProducerRecord<>("Test_Topic", "ID_5", "VAL_5");
```

- Same key → same partition
- Ordering guaranteed within a partition
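The keyed case works by hashing: Kafka's default partitioner applies murmur2 to the serialized key bytes and takes the result modulo the partition count. The sketch below substitutes `String.hashCode()` for murmur2, so the partition numbers differ from Kafka's, but the property it demonstrates is the same: a given key always maps to the same partition.

```java
public class KeyedPartitionSketch {
    // Hash the key and map it to a partition, masking the sign bit so the
    // result is non-negative. (Kafka uses murmur2 here; hashCode() is a stand-in.)
    static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // The same key always lands on the same partition, so per-key order holds.
        System.out.println("ID_5 -> " + partitionFor("ID_5", 3));
        System.out.println("ID_5 -> " + partitionFor("ID_5", 3)); // same partition again
        System.out.println("ID_6 -> " + partitionFor("ID_6", 3)); // may differ
    }
}
```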
### Async Callbacks

```java
producer.send(record, (metadata, exception) -> {
    if (exception != null) {
        logger.error("❌ Error: " + exception.getMessage(), exception);
    } else {
        logger.info("✅ Sent to partition " + metadata.partition());
    }
});
```

### Consumer Groups

- Multiple consumers can share workload
- Each partition consumed by only one consumer in a group
- Enables horizontal scaling
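The "each partition consumed by only one consumer in a group" rule can be pictured as a simple partition-to-consumer mapping. This is a toy assignment in the spirit of Kafka's round-robin assignor; in reality the group coordinator handles assignment and rebalancing:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class GroupAssignmentSketch {
    // Spread partitions across consumers so each partition has exactly one owner.
    static Map<String, List<Integer>> assign(List<String> consumers, int numPartitions) {
        Map<String, List<Integer>> assignment = new LinkedHashMap<>();
        for (String c : consumers) assignment.put(c, new ArrayList<>());
        for (int p = 0; p < numPartitions; p++) {
            String owner = consumers.get(p % consumers.size());
            assignment.get(owner).add(p);
        }
        return assignment;
    }

    public static void main(String[] args) {
        // Two consumers in one group sharing three partitions.
        System.out.println(assign(List.of("consumer-a", "consumer-b"), 3));
        // {consumer-a=[0, 2], consumer-b=[1]}
    }
}
```

Adding a third consumer to the group would take over one of the partitions, which is exactly how Kafka scales consumption horizontally.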
### Offset Tracking

- Tracks message position in a partition
- Enables replay and fault tolerance
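An offset is essentially a per-partition bookmark: the consumer commits the position it has processed, and after a restart it resumes (or deliberately replays) from the last committed offset. A toy model of that bookkeeping, independent of the Kafka API:

```java
import java.util.HashMap;
import java.util.Map;

public class OffsetSketch {
    // Committed offset per partition: the next position to read.
    private final Map<Integer, Long> committed = new HashMap<>();

    void commit(int partition, long offset) {
        committed.put(partition, offset);
    }

    // Where a restarted consumer resumes; 0 if nothing was ever committed.
    long resumeFrom(int partition) {
        return committed.getOrDefault(partition, 0L);
    }

    public static void main(String[] args) {
        OffsetSketch tracker = new OffsetSketch();
        tracker.commit(0, 24); // processed up to offset 23, so commit 24
        // After a crash or restart, consumption resumes at the committed position.
        System.out.println("resume partition 0 at " + tracker.resumeFrom(0));
        System.out.println("resume partition 1 at " + tracker.resumeFrom(1)); // never committed
    }
}
```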
## Kafka UI

Access the Kafka UI dashboard at http://localhost:8080

Features:
- View topics, partitions, and messages
- Search and filter messages
- Monitor consumer lag
- Manage broker configurations
## Learning Outcomes

Through this project, I explored:

- ✅ Kafka Architecture - Brokers, topics, partitions, and replicas
- ✅ Producer API - Synchronous vs asynchronous sending
- ✅ Consumer API - Polling, offsets, and consumer groups
- ✅ Message Ordering - Key-based partitioning strategies
- ✅ Docker Orchestration - Multi-container Kafka setup
- ✅ Monitoring - Using Kafka UI for operational insights
## Configuration

| Property | Value | Description |
|---|---|---|
| `KAFKA_BROKER_ID` | `1` | Unique broker identifier |
| `KAFKA_ZOOKEEPER_CONNECT` | `zookeeper:2181` | Zookeeper connection |
| `KAFKA_ADVERTISED_LISTENERS` | `PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092` | Internal & external listeners |
| `KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR` | `1` | Offset topic replication |
### Producer Configuration

```java
properties.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092");
properties.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
properties.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
```

### Consumer Configuration

```java
properties.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092");
properties.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
properties.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
// group.id names the consumer group (any identifier works); "earliest" belongs to
// auto.offset.reset, which controls where a group with no committed offsets starts reading.
properties.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "my-consumer-group");
properties.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
```

## Stopping the Services

```bash
docker compose down
```

To remove volumes as well:

```bash
docker compose down -v
```

This is an R&D project for learning purposes. Feel free to fork and experiment!
This project is open source and available for educational purposes.
Built with ❤️ and curiosity
