Introduction to Apache Kafka for Developers (TTDS6760)
Hands-on Kafka: Essentials, Architecture, Streaming, Producers, Consumers, Performance and More
More Information:
- Learning Style: Virtual Course
- Difficulty: Beginner
- Course Duration: 2 Days
Need Training for 5 or More People?
Customized to your team's needs:
- Annual Subscriptions
- Private Training
- Flexible Pricing
- Enterprise LMS
- Dedicated Customer Success Manager
Course Information
About This Course:
Apache Kafka is a distributed platform for building real-time data pipelines and streaming applications. Its high scalability, fault tolerance, execution speed, and fluid integrations make it an integral part of many enterprise data architectures.
Geared for experienced Java developers, Introduction to Apache Kafka for Developers is a fast-paced, lab-intensive, two-day hands-on course that explores the potential of fast data and streaming systems, and how to navigate the complexities of modern streaming architectures. Throughout the course you'll explore the ins and outs of Apache Kafka and learn how it compares to other queue systems such as JMS and MQ. You'll learn about Kafka's unique architecture and understand how to effectively produce and consume messages with Kafka and ZooKeeper. Through hands-on labs, you'll gain experience scaling Kafka, working across multiple data centers, and implementing disaster recovery solutions, while exploring essential Kafka utilities.
You'll also explore the powerful Kafka APIs and become proficient with configuration parameters, the Producer and Consumer APIs, and advanced features such as message compression and offset management. You'll gain hands-on experience with Kafka, including benchmarking Producer send modes, comparing compression schemes, and managing offsets, and you'll work through real-world applications such as clickstream processing to solidify your expertise. You'll then round off your Kafka journey with an in-depth look at the Kafka Streams API, monitoring, and troubleshooting techniques, and learn how to optimize your Kafka deployment with best practices for hardware selection, cluster sizing, and ZooKeeper settings.
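As a taste of the configuration topics covered in the labs, a minimal producer configuration sketch might look like the following. This is only an illustration, not course material; the broker address is a placeholder, and the specific values (compression scheme, batching, acknowledgment mode) are the kinds of knobs the benchmarking labs compare:

```properties
# Placeholder broker address - replace with your cluster's bootstrap servers
bootstrap.servers=localhost:9092

# Serializers for record keys and values
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=org.apache.kafka.common.serialization.StringSerializer

# Send-mode trade-off: acks=all waits for full replication (safest, slowest);
# acks=1 waits for the leader only; acks=0 is fire-and-forget
acks=all

# Compression scheme - one of none, gzip, snappy, lz4, zstd
compression.type=lz4

# Batching: wait up to 10 ms to fill batches of up to 32 KB
linger.ms=10
batch.size=32768
```

Which combination performs best depends on message size, network latency, and durability requirements, which is why the course has you benchmark the options rather than prescribing one.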
Course Objectives:
- Implement and configure Apache Kafka effectively, demonstrating a deep understanding of its unique architecture, core concepts, and the differences between Kafka and other queue systems (JMS/MQ).
- Utilize Kafka APIs proficiently, including the Producer and Consumer APIs, and apply advanced techniques such as message compression, offset management, and Producer send modes.
- Design and develop streaming applications using the Kafka Streams API, performing complex operations such as transformations, filters, joins, and aggregations, while working with KStream, KTable, and state store concepts.
- Monitor and troubleshoot Kafka deployments, identifying performance bottlenecks, addressing common issues, and employing best practices for hardware selection, cluster sizing, partition sizing, and ZooKeeper settings.
- Apply the skills and knowledge acquired throughout the course to real-world scenarios, demonstrating the ability to develop, deploy, and optimize Kafka-based streaming applications for a variety of use cases.
Audience
- This course is geared for experienced Java developers and architects who are new to Kafka. It is not intended for non-developers.
Prerequisites:
- Basic Java programming skills and a practical Java development background.
- Reasonable experience working with databases.
- Basic Linux skills and the ability to work from the Linux command line.
- Basic knowledge of Linux editors (such as vi or nano) for editing code.