Apache Kafka Practical Recipes

Raul Estrada

  • Publisher: Packt Publishing
  • Publication date: 2017-12-21
  • List price: $1,520
  • Member price: 5% off, $1,444
  • Language: English
  • Pages: 250
  • Binding: Paperback
  • ISBN: 1787286843
  • ISBN-13: 9781787286849
  • Related categories: Message Queue
  • Stocked to order (approx. 3-4 weeks)

Product Description

Key Features

  • Use Kafka to build efficient streaming applications that process your data
  • Integrate Kafka with other Big Data tools such as Hadoop, Spark, and more
  • Hands-on recipes to help you design, operate, maintain, and secure your Apache Kafka cluster with ease

Book Description

Apache Kafka aims to provide a unified, high-throughput, low-latency platform for handling real-time data feeds. This book shows readers how Kafka can be used as an efficient enterprise messaging service, and it contains practical solutions to the common problems developers and administrators might face while working with it.

Starting with configuring the basic Kafka APIs, the book covers recipes for setting up Kafka clusters and performing basic Kafka operations. You will learn to configure producers and consumers for optimal performance and to set up tools for maintaining and operating Apache Kafka. The book contains easy-to-follow recipes for building real-time streaming data pipelines that move data between systems and applications, as well as real-time streaming applications that process streams of data. You will also learn how to monitor Kafka using tools such as Graphite and Ganglia. Finally, you will understand how Apache Kafka can be used alongside third-party tools for Big Data processing, such as Apache Spark, Hadoop, and more.
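To give a sense of the producer-configuration recipes the description mentions, here is a minimal sketch of a Kafka producer written in Scala against Kafka's Java client. The broker address localhost:9092, the topic name events, and the acks=all setting are illustrative assumptions, not details taken from the book.

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object ProducerSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    // Assumed broker address; point this at your own cluster.
    props.put("bootstrap.servers", "localhost:9092")
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    // Wait for acknowledgment from all in-sync replicas: slower, but more durable.
    props.put("acks", "all")

    val producer = new KafkaProducer[String, String](props)
    // "events" is a hypothetical topic name used only for illustration.
    producer.send(new ProducerRecord[String, String]("events", "key-1", "hello, kafka"))
    producer.close()
  }
}
```

Settings like acks, batching, and compression are exactly the kind of producer knobs the book's performance recipes are concerned with.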

By the end of this book, you will have all the knowledge you need to take your understanding of Apache Kafka to the next level, and to tackle any problem you might encounter while working with it.

What you will learn

  • Configure, operate, and monitor Kafka in the most efficient ways possible
  • Learn all about Kafka consumers and producers
  • Design effective streaming applications with Kafka using Spark and Hadoop
  • Achieve high availability with Kafka clusters
  • Master the new Confluent Platform
  • Understand and implement best practices for managing and securing Kafka
  • Integrate third-party tools such as Spark, Hadoop, Elasticsearch, and others with Kafka (see the integration sketch after this list)
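As a taste of the third-party integration listed above, here is a minimal sketch of reading a Kafka topic from Spark Structured Streaming. The topic name events, the broker address localhost:9092, and the console sink are assumptions chosen for illustration, and the sketch requires the spark-sql-kafka-0-10 connector on the classpath.

```scala
import org.apache.spark.sql.SparkSession

object KafkaToConsole {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-spark-sketch")
      .getOrCreate()

    // Subscribe to the hypothetical "events" topic; the broker address is an assumption.
    val stream = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "events")
      .load()

    // Kafka delivers key/value as binary columns; cast them to strings before printing.
    val query = stream
      .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
      .writeStream
      .format("console")
      .start()

    query.awaitTermination()
  }
}
```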