
Apache Kafka

See our web site for details on the project.

You need to have Java installed.

We build and test Apache Kafka with Java 8, 11, 17 and 21. We set the release parameter in javac and scalac to 8 to ensure the generated binaries are compatible with Java 8 or higher (independently of the Java version used for compilation). Java 8 support project-wide has been deprecated since Apache Kafka 3.0, Java 11 support for the broker and tools has been deprecated since Apache Kafka 3.7, and removal of both is planned for Apache Kafka 4.0 (see KIP-750 and KIP-1013 for more details).
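
A quick sanity check of the local toolchain before building (a minimal sketch; the exact version output depends on the JDK vendor):

java -version        # should report one of the supported majors: 8, 11, 17 or 21
./gradlew --version  # also prints the JVM that Gradle itself runs on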

Scala 2.12 and 2.13 are supported and 2.13 is used by default. Scala 2.12 support has been deprecated since Apache Kafka 3.0 and will be removed in Apache Kafka 4.0 (see KIP-751 for more details). See below for how to use a specific Scala version or all of the supported Scala versions.

Build a jar and run it

./gradlew jar

Follow instructions in https://kafka.apache.org/quickstart

Build source jar

./gradlew srcJar

Build aggregated javadoc

./gradlew aggregatedJavadoc

Build javadoc and scaladoc

./gradlew javadoc
./gradlew javadocJar # builds a javadoc jar for each module
./gradlew scaladoc
./gradlew scaladocJar # builds a scaladoc jar for each module
./gradlew docsJar # builds both (if applicable) javadoc and scaladoc jars for each module

Run unit/integration tests

./gradlew test # runs both unit and integration tests
./gradlew unitTest
./gradlew integrationTest

Force re-running tests without code change

./gradlew test --rerun
./gradlew unitTest --rerun
./gradlew integrationTest --rerun

Running a particular unit/integration test

./gradlew clients:test --tests RequestResponseTest

Repeatedly running a particular unit/integration test

I=0; while ./gradlew clients:test --tests RequestResponseTest --rerun --fail-fast; do (( I=$I+1 )); echo "Completed run: $I"; sleep 1; done

Running a particular test method within a unit/integration test

./gradlew core:test --tests kafka.api.ProducerFailureHandlingTest.testCannotSendToInternalTopic
./gradlew clients:test --tests org.apache.kafka.clients.MetadataTest.testTimeToNextUpdate

Running a particular unit/integration test with log4j output

By default, only a small number of log messages is output while running tests. You can adjust this by changing the log4j.properties file in the module's src/test/resources directory.

For example, if you want to see more logs for clients project tests, you can modify the line in clients/src/test/resources/log4j.properties to log4j.logger.org.apache.kafka=INFO and then run:

./gradlew cleanTest clients:test --tests NetworkClientTest   

And you should see INFO level logs in the file under the clients/build/test-results/test directory.
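
For reference, the edit described above amounts to changing a single logger line (an illustrative excerpt; the rest of the file stays as shipped):

# clients/src/test/resources/log4j.properties (excerpt)
log4j.logger.org.apache.kafka=INFO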

Specifying test retries

By default, each failed test is retried once up to a maximum of five retries per test run. Tests are retried at the end of the test task. Adjust these parameters in the following way:

./gradlew test -PmaxTestRetries=1 -PmaxTestRetryFailures=5

See Test Retry Gradle Plugin for more details.

Generating test coverage reports

Generate coverage reports for the whole project:

./gradlew reportCoverage -PenableTestCoverage=true -Dorg.gradle.parallel=false

Generate coverage for a single module, for example:

./gradlew clients:reportCoverage -PenableTestCoverage=true -Dorg.gradle.parallel=false

Building a binary release gzipped tar ball

./gradlew clean releaseTarGz

The release file can be found inside ./core/build/distributions/.

Building auto generated messages

Sometimes it is only necessary to rebuild the RPC auto-generated message data when switching between branches, as builds could otherwise fail due to code changes. You can just run:

./gradlew processMessages processTestMessages

Running a Kafka broker in KRaft mode

Using compiled files:

KAFKA_CLUSTER_ID="$(./bin/kafka-storage.sh random-uuid)"
./bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c config/kraft/server.properties
./bin/kafka-server-start.sh config/kraft/server.properties

Using the Docker image:

docker run -p 9092:9092 apache/kafka:3.7.0

Running a Kafka broker in ZooKeeper mode

Using compiled files:

./bin/zookeeper-server-start.sh config/zookeeper.properties
./bin/kafka-server-start.sh config/server.properties

Since ZooKeeper mode is already deprecated and planned to be removed in Apache Kafka 4.0, the Docker image only supports running in KRaft mode.

Cleaning the build

./gradlew clean

Running a task with one of the Scala versions available (2.12.x or 2.13.x)

Note that if building the jars with a version other than 2.13.x, you need to set the SCALA_VERSION variable or change it in bin/kafka-run-class.sh to run the quick start.
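
For example, a minimal sketch of running the quick start against 2.12 jars by exporting the variable instead of editing the script (the patch version shown is illustrative):

export SCALA_VERSION=2.12.18   # illustrative; match the Scala version the jars were built with
./bin/kafka-server-start.sh config/kraft/server.properties   # the quick start scripts now resolve the 2.12 classpath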

You can pass either the major version (e.g. 2.12) or the full version (e.g. 2.12.7):

./gradlew -PscalaVersion=2.12 jar
./gradlew -PscalaVersion=2.12 test
./gradlew -PscalaVersion=2.12 releaseTarGz

Running a task with all the Scala versions enabled by default

Invoke the gradlewAll script followed by the task(s):

./gradlewAll test
./gradlewAll jar
./gradlewAll releaseTarGz

Running a task for a specific project

This applies to core, examples and clients:

./gradlew core:jar
./gradlew core:test

Streams has multiple sub-projects, but you can run all the tests:

./gradlew :streams:testAll

Listing all gradle tasks

./gradlew tasks

Building IDE project

Note that this is not strictly necessary (IntelliJ IDEA has good built-in support for Gradle projects, for example).

./gradlew eclipse
./gradlew idea

The eclipse task has been configured to use ${project_dir}/build_eclipse as Eclipse's build directory. Eclipse's default build directory (${project_dir}/bin) clashes with Kafka's scripts directory and we don't use Gradle's build directory to avoid known issues with this configuration.

Publishing the jar for all versions of Scala and for all projects to Maven

The recommended command is:

./gradlewAll publish

For backwards compatibility, the following also works:

./gradlewAll uploadArchives

Please note that for this to work you should create/update ${GRADLE_USER_HOME}/gradle.properties (typically, ~/.gradle/gradle.properties) and assign the following variables:

mavenUrl=
mavenUsername=
mavenPassword=
signing.keyId=
signing.password=
signing.secretKeyRingFile=

Publishing the streams quickstart archetype artifact to Maven

For the Streams archetype project, one cannot use Gradle to upload to Maven; instead, the mvn deploy command needs to be run from the quickstart folder:

cd streams/quickstart
mvn deploy

Please note that for this to work you should create/update your user Maven settings (typically, ${USER_HOME}/.m2/settings.xml) to assign the following variables:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
                       https://maven.apache.org/xsd/settings-1.0.0.xsd">
...
<servers>
   ...
   <server>
      <id>apache.snapshots.https</id>
      <username>${maven_username}</username>
      <password>${maven_password}</password>
   </server>
   <server>
      <id>apache.releases.https</id>
      <username>${maven_username}</username>
      <password>${maven_password}</password>
   </server>
   ...
</servers>
...
</settings>

Installing ALL the jars to the local Maven repository

The recommended command to build for both Scala 2.12 and 2.13 is:

./gradlewAll publishToMavenLocal

For backwards compatibility, the following also works:

./gradlewAll install

Installing specific projects to the local Maven repository

./gradlew -PskipSigning=true :streams:publishToMavenLocal

If needed, you can specify the Scala version with -PscalaVersion=2.13.
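
For instance, combining both flags (a sketch composed from the commands above):

./gradlew -PscalaVersion=2.13 -PskipSigning=true :streams:publishToMavenLocal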

Building the test jar

./gradlew testJar

Running code quality checks

There are two code quality analysis tools that we regularly run: spotbugs and checkstyle.

Checkstyle

Checkstyle enforces a consistent coding style in Kafka. You can run checkstyle using:

./gradlew checkstyleMain checkstyleTest

The checkstyle warnings will be found in reports/checkstyle/reports/main.html and reports/checkstyle/reports/test.html files in the subproject build directories. They are also printed to the console. The build will fail if Checkstyle fails.

Spotbugs

Spotbugs uses static analysis to look for bugs in the code. You can run spotbugs using:

./gradlew spotbugsMain spotbugsTest -x test

The spotbugs warnings will be found in reports/spotbugs/main.html and reports/spotbugs/test.html files in the subproject build directories. Use -PxmlSpotBugsReport=true to generate an XML report instead of an HTML one.
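
For example, a sketch of generating the XML report instead of HTML, using the flag mentioned above:

./gradlew spotbugsMain spotbugsTest -PxmlSpotBugsReport=true -x test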

JMH microbenchmarks

We use JMH to write microbenchmarks that produce reliable results in the JVM.

See jmh-benchmarks/README.md for details on how to run the microbenchmarks.

Dependency Analysis

The Gradle dependency debugging documentation mentions using the dependencies or dependencyInsight tasks to debug dependencies for the root project or individual subprojects.

Alternatively, use the allDeps or allDepInsight tasks for recursively iterating through all subprojects:

./gradlew allDeps

./gradlew allDepInsight --configuration runtimeClasspath --dependency com.fasterxml.jackson.core:jackson-databind

These take the same arguments as the built-in variants.

Determining if any dependencies could be updated

./gradlew dependencyUpdates

Common build options

The following options should be set with a -P switch, for example ./gradlew -PmaxParallelForks=1 test. A combined example follows the list below.

  • commitId: sets the build commit ID as .git/HEAD might not be correct if there are local commits added for build purposes.
  • mavenUrl: sets the URL of the maven deployment repository (file://path/to/repo can be used to point to a local repository).
  • maxParallelForks: maximum number of test processes to start in parallel. Defaults to the number of processors available to the JVM.
  • maxScalacThreads: maximum number of worker threads for the scalac backend. Defaults to the lowest of 8 and the number of processors available to the JVM. The value must be between 1 and 16 (inclusive).
  • ignoreFailures: ignore test failures from JUnit.
  • showStandardStreams: shows standard out and standard error of the test JVM(s) on the console.
  • skipSigning: skips signing of artifacts.
  • testLoggingEvents: unit test events to be logged, separated by comma. For example ./gradlew -PtestLoggingEvents=started,passed,skipped,failed test.
  • xmlSpotBugsReport: enable XML reports for spotBugs. This also disables HTML reports as only one can be enabled at a time.
  • maxTestRetries: maximum number of retries for a failing test case.
  • maxTestRetryFailures: maximum number of test failures before retrying is disabled for subsequent tests.
  • enableTestCoverage: enables test coverage plugins and tasks, including bytecode enhancement of classes required to track said coverage. Note that this introduces some overhead when running tests, which is why it's disabled by default (the overhead varies, but 15-20% is a reasonable estimate).
  • keepAliveMode: configures the keep alive mode for the Gradle compilation daemon - reuse improves start-up time. The values should be one of daemon or session (the default is daemon). daemon keeps the daemon alive until it's explicitly stopped while session keeps it alive until the end of the build session. This currently only affects the Scala compiler, see gradle/gradle#21034 for a PR that attempts to do the same for the Java compiler.
  • scalaOptimizerMode: configures the optimizing behavior of the scala compiler, the value should be one of none, method, inline-kafka or inline-scala (the default is inline-kafka). none is the scala compiler default, which only eliminates unreachable code. method also includes method-local optimizations. inline-kafka adds inlining of methods within the kafka packages. Finally, inline-scala also includes inlining of methods within the scala library (which avoids lambda allocations for methods like Option.exists). inline-scala is only safe if the Scala library version is the same at compile time and runtime. Since we cannot guarantee this for all cases (for example, users may depend on the kafka jar for integration tests where they may include a scala library with a different version), we don't enable it by default. See https://www.lightbend.com/blog/scala-inliner-optimizer for more details.
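
As a combined illustration, several of these options can be passed in a single invocation (the values here are arbitrary):

./gradlew -PmaxParallelForks=4 -PmaxTestRetries=1 -PmaxTestRetryFailures=10 -PtestLoggingEvents=started,passed,skipped,failed test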

Running system tests

See tests/README.md.

Running in Vagrant

See vagrant/README.md.

Contribution

Apache Kafka is interested in building the community; we would welcome any thoughts or patches. You can reach us on the Apache mailing lists.

To contribute, follow the instructions here:
