πŸš€ Apache Kafka Exploration - Java R&D Project

A hands-on exploration of Apache Kafka fundamentals using Java, demonstrating producer-consumer patterns, message streaming, and event-driven architecture concepts.


πŸ“‹ Table of Contents

  • 🎯 Overview
  • ✨ Features
  • πŸ› οΈ Tech Stack
  • πŸ“ Project Structure
  • πŸš€ Getting Started
  • πŸ’‘ Usage Examples
  • 🧠 Key Concepts Explored
  • πŸ–₯️ Kafka UI
  • πŸ“š Learning Outcomes
  • πŸ”§ Configuration Details
  • πŸ›‘ Stopping the Infrastructure
  • 🀝 Contributing
  • πŸ“ License
  • πŸ”— Useful Resources

🎯 Overview

This R&D project explores Apache Kafka's core capabilities through practical Java implementations. It demonstrates fundamental messaging patterns including:

  • Producer-Consumer Architecture: Basic message publishing and consumption
  • Keyed Messages: Understanding partitioning and message ordering
  • Asynchronous Processing: Callback-based message handling
  • Containerized Infrastructure: Docker-based Kafka cluster setup

Perfect for developers learning distributed messaging systems or exploring event-driven architecture patterns.


✨ Features

Producer Implementations

  • βœ… Basic Producer - Simple message publishing without keys
  • βœ… Keyed Producer - Partition-aware message routing using keys
  • βœ… Async Callbacks - Non-blocking message delivery with acknowledgments
  • βœ… Batch Processing - Multiple message production in loops

Consumer Implementation

  • βœ… Continuous Polling - Real-time message consumption
  • βœ… Consumer Groups - Scalable message processing
  • βœ… Offset Tracking - Message position monitoring
  • βœ… Partition Awareness - Understanding message distribution

Infrastructure

  • βœ… Docker Compose Setup - One-command Kafka cluster deployment
  • βœ… Kafka UI Dashboard - Visual monitoring and management
  • βœ… Zookeeper Integration - Cluster coordination

πŸ› οΈ Tech Stack

| Technology   | Version | Purpose                             |
|--------------|---------|-------------------------------------|
| Java         | 17      | Core programming language           |
| Apache Kafka | 4.0.0   | Distributed streaming platform      |
| Maven        | -       | Dependency management & build tool  |
| Docker       | -       | Containerization                    |
| Kafka UI     | Latest  | Web-based Kafka management          |
| SLF4J        | 2.0.17  | Logging framework                   |

πŸ“ Project Structure

apche/
β”œβ”€β”€ src/
β”‚   └── main/
β”‚       └── java/
β”‚           β”œβ”€β”€ Producer/
β”‚           β”‚   β”œβ”€β”€ ProducerInIt.java       # Basic producer implementation
β”‚           β”‚   └── ProducerWithKey.java    # Keyed message producer
β”‚           β”œβ”€β”€ Consumer/
β”‚           β”‚   └── ConsumerInIt.java       # Consumer implementation
β”‚           └── com/apche/
β”‚               └── Main.java               # Entry point
β”œβ”€β”€ compose.yaml                            # Docker Compose configuration
β”œβ”€β”€ pom.xml                                 # Maven dependencies
└── README.md                               # This file

πŸš€ Getting Started

Prerequisites

  • Java 17 or higher
  • Maven 3.6+
  • Docker & Docker Compose

Installation & Setup

  1. Clone the repository

    git clone <repository-url>
    cd apche
  2. Start Kafka infrastructure

    docker compose up -d

    This starts:

    • Zookeeper (port 2181)
    • Kafka broker (ports 9092, 29092)
    • Kafka UI (port 8080)
  3. Verify services are running

    docker compose ps
  4. Build the project

    mvn clean install

πŸ’‘ Usage Examples

Running the Basic Producer

Sends 10 messages to Test_Topic without keys:

mvn exec:java -Dexec.mainClass="Producer.ProducerInIt"

Expected Output:

βœ… Sent to topic Test_Topic partition 0 offset 42
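
A minimal sketch of what a producer like ProducerInIt could look like, inferred from the output above rather than copied from the repo, so treat the class body and message format as assumptions:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class ProducerInIt {
    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092");
        properties.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        properties.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(properties)) {
            // Send 10 unkeyed messages; without a key the partitioner spreads them across partitions
            for (int i = 0; i < 10; i++) {
                ProducerRecord<String, String> record =
                        new ProducerRecord<>("Test_Topic", "hello world " + i);
                producer.send(record, (metadata, exception) -> {
                    if (exception == null) {
                        System.out.println("βœ… Sent to topic " + metadata.topic()
                                + " partition " + metadata.partition()
                                + " offset " + metadata.offset());
                    }
                });
            }
            producer.flush(); // make sure async sends complete before the JVM exits
        }
    }
}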

Running the Keyed Producer

Sends 10 keyed messages (ensures same key β†’ same partition):

mvn exec:java -Dexec.mainClass="Producer.ProducerWithKey"

Key Concept: Messages with the same key always go to the same partition, maintaining order.
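
Under the hood, Kafka's default partitioner hashes the serialized key bytes (using murmur2) and takes the result modulo the partition count, which is why a given key always maps to the same partition. A simplified illustration of the idea, not the actual Kafka code:

// Toy illustration of key-based partition selection (assumes Test_Topic has 3 partitions)
int numPartitions = 3;
String key = "ID_5";
// Kafka really uses murmur2 over the serialized key bytes; hashCode() here only shows the principle
int partition = Math.abs(key.hashCode()) % numPartitions;
System.out.println("Key " + key + " β†’ partition " + partition);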

Running the Consumer

Continuously polls and displays messages from Test_Topic:

mvn exec:java -Dexec.mainClass="Consumer.ConsumerInIt"

Expected Output:

KEY:ID_5, VALUE:VAL_5
Partitions:1, Offset:23
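
A minimal sketch of the kind of poll loop that produces output like this; the body of ConsumerInIt isn't reproduced here, so the class name and print format are assumptions:

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class ConsumerSketch {
    public static void main(String[] args) {
        // Mirrors the consumer configuration listed under Configuration Details below
        Properties properties = new Properties();
        properties.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092");
        properties.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        properties.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        properties.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "earliest");   // consumer group name used in this project

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties)) {
            consumer.subscribe(List.of("Test_Topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println("KEY:" + record.key() + ", VALUE:" + record.value());
                    System.out.println("Partitions:" + record.partition() + ", Offset:" + record.offset());
                }
            }
        }
    }
}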

🧠 Key Concepts Explored

1. Producer Patterns

Without Keys (Round-Robin Distribution)

ProducerRecord<String, String> record =
    new ProducerRecord<>("Test_Topic", "hello world");
  • Messages distributed across partitions in round-robin fashion
  • No ordering guarantees

With Keys (Partition Affinity)

ProducerRecord<String, String> record =
    new ProducerRecord<>("Test_Topic", "ID_5", "VAL_5");
  • Same key β†’ same partition
  • Ordering guaranteed within partition

2. Asynchronous Callbacks

producer.send(record, (metadata, exception) -> {
    if (exception != null) {
        logger.info("❌ Error: " + exception.getMessage());
    } else {
        logger.info("βœ… Sent to partition " + metadata.partition());
    }
});
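
Since send() hands the record to a background I/O thread and returns immediately, it is worth flushing or closing the producer before the program exits so the callbacks above actually get a chance to run:

producer.flush();   // block until every buffered record has been sent
producer.close();   // flush remaining records and release resources (or use try-with-resources)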

3. Consumer Groups

  • Multiple consumers can share workload
  • Each partition is consumed by only one consumer in a group (see the sketch below)
  • Enables horizontal scaling
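
A quick way to see the split in practice is to print a consumer's partition assignment after its first poll. A small sketch, not part of this repo, assuming consumer is an already-subscribed KafkaConsumer<String, String> and using org.apache.kafka.common.TopicPartition:

// After the first poll the group coordinator has assigned partitions to this consumer
consumer.poll(Duration.ofMillis(1000));
for (TopicPartition tp : consumer.assignment()) {
    System.out.println("Assigned: " + tp.topic() + "-" + tp.partition());
}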

4. Offset Management

  • Tracks message position in partition
  • Enables replay and fault tolerance (see the sketch below)
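
The configuration shown in this README leaves Kafka's default auto-commit enabled, but offsets can also be committed manually for tighter control. A hedged sketch of that variant, not this project's code:

// Disable auto-commit so offsets only advance once processing has succeeded
properties.setProperty(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
for (ConsumerRecord<String, String> record : records) {
    // process the record ...
}
consumer.commitSync();   // persist the consumed offsets; a restarted consumer resumes from here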

πŸ–₯️ Kafka UI

Access the Kafka UI dashboard at http://localhost:8080

Features:

  • πŸ“Š View topics, partitions, and messages
  • πŸ” Search and filter messages
  • πŸ“ˆ Monitor consumer lag
  • βš™οΈ Manage broker configurations

(Screenshot: Kafka UI dashboard)


πŸ“š Learning Outcomes

Through this project, I explored:

βœ… Kafka Architecture - Brokers, topics, partitions, and replicas
βœ… Producer API - Synchronous vs asynchronous sending
βœ… Consumer API - Polling, offsets, and consumer groups
βœ… Message Ordering - Key-based partitioning strategies
βœ… Docker Orchestration - Multi-container Kafka setup
βœ… Monitoring - Using Kafka UI for operational insights


πŸ”§ Configuration Details

Kafka Broker Configuration

| Property | Value | Description |
|----------|-------|-------------|
| KAFKA_BROKER_ID | 1 | Unique broker identifier |
| KAFKA_ZOOKEEPER_CONNECT | zookeeper:2181 | Zookeeper connection |
| KAFKA_ADVERTISED_LISTENERS | PLAINTEXT://kafka:9092, PLAINTEXT_HOST://localhost:29092 | Internal & external listeners |
| KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR | 1 | Offset topic replication |

Producer Configuration

properties.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092");
properties.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
properties.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
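
To turn these properties into a working producer you simply construct a KafkaProducer from them; the extra reliability settings below are optional illustrations and not necessarily what this project sets:

// Optional reliability/batching settings (illustrative; not confirmed for this repo)
properties.setProperty(ProducerConfig.ACKS_CONFIG, "all");      // wait for all in-sync replicas to acknowledge
properties.setProperty(ProducerConfig.LINGER_MS_CONFIG, "5");   // small batching delay in milliseconds

KafkaProducer<String, String> producer = new KafkaProducer<>(properties);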

Consumer Configuration

properties.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092");
properties.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
properties.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
properties.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "earliest");
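
Note that group.id is simply the consumer group's name; where a brand-new group starts reading is controlled by a separate setting, auto.offset.reset (Kafka's default is "latest"). A hedged addition, not necessarily present in this project's code:

// Optional: where a group with no committed offsets starts reading ("latest" is Kafka's default)
properties.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");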

πŸ›‘ Stopping the Infrastructure

docker compose down

To remove volumes as well:

docker compose down -v

🀝 Contributing

This is an R&D project for learning purposes. Feel free to fork and experiment!


πŸ“ License

This project is open source and available for educational purposes.


πŸ”— Useful Resources


Built with β˜• and curiosity
