Why is Kafka so fast



We all know that Kafka stores its data on disk, yet Kafka officially claims high performance, high throughput, and low latency, with throughput in the tens of millions of messages.

Reading and writing data on disk is generally believed to hurt performance, because seeking is time-consuming. So how does Kafka reach a throughput of tens of millions?

Kafka's high performance is the combined result of its underlying design: distributed partitioned storage, efficient use of the disk and operating-system characteristics, batching, compression, and more.
1. Use partitions to implement parallel processing

We know that Kafka is a pub-sub messaging system: both publishing and subscribing must specify a topic. A topic is only a logical concept; each topic contains one or more partitions, and different partitions can be located on different nodes.

On the one hand, because different partitions can be located on different machines, Kafka can make full use of the cluster to achieve parallel processing across machines.

On the other hand, because each partition physically corresponds to a directory, even partitions located on the same node can be placed on different disks, achieving parallel processing across disks and making full use of multiple disks.
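To make the key-to-partition mapping concrete, here is a minimal sketch of how a keyed message selects a partition. This is a simplified stand-in using `String.hashCode()` — Kafka's actual default partitioner hashes the serialized key with murmur2 — and the method and key names are illustrative:

```java
// Sketch: how a keyed message maps to one of a topic's partitions.
// Simplified stand-in; Kafka's default partitioner uses murmur2
// on the serialized key bytes, not String.hashCode().
public class PartitionSketch {
    static int partitionFor(String key, int numPartitions) {
        // Mask the sign bit so the modulo result is non-negative.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int numPartitions = 3;
        // The same key always lands in the same partition, which is
        // what preserves per-key ordering within a partition.
        int p1 = partitionFor("order-42", numPartitions);
        int p2 = partitionFor("order-42", numPartitions);
        System.out.println(p1 == p2);                      // true
        System.out.println(p1 >= 0 && p1 < numPartitions); // true
    }
}
```

Because the mapping is deterministic, all messages with the same key go to the same partition and are therefore consumed in order relative to each other.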

2. Write to disk sequentially

Due to the characteristics of the storage medium, the disk itself is slower than main memory; add the cost of mechanical movement, and disk access is often orders of magnitude slower than memory access. So how can read/write efficiency be improved?

There are two methods:

    1. Read-ahead (prefetching);
    2. Write merging — multiple logical write operations are merged into one large physical write.

Beyond that, Kafka reads and writes the disk sequentially (no seek time, only a little rotational delay): data is only appended to the end of a file, never modified at random positions. Each partition in Kafka is an ordered, immutable sequence of messages, and new messages are continuously appended to the end of the partition. This is sequential writing.
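The append-only access pattern can be sketched with a plain Java NIO file channel opened in append mode. The file name and record format below are illustrative, not Kafka's actual segment layout:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch: an append-only log write, the access pattern Kafka's
// partition segments rely on.
public class AppendOnlyLog {
    // Append `count` records to the segment file, then read the whole
    // file back to show the records sit in write order at the end.
    static String appendRecords(Path segment, int count) throws IOException {
        // APPEND means every write goes to the current end of the file:
        // no seeking back, so the disk access stays sequential.
        try (FileChannel log = FileChannel.open(segment,
                StandardOpenOption.WRITE, StandardOpenOption.APPEND)) {
            for (int offset = 0; offset < count; offset++) {
                byte[] record = ("msg-" + offset + "\n").getBytes(StandardCharsets.UTF_8);
                log.write(ByteBuffer.wrap(record));
            }
        }
        return Files.readString(segment);
    }

    public static void main(String[] args) throws IOException {
        Path segment = Files.createTempFile("segment-", ".log");
        System.out.print(appendRecords(segment, 3)); // msg-0, msg-1, msg-2
    }
}
```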
3. Use page cache + mmap

Even with sequential writes, disk access speed cannot catch up with memory. So Kafka does not write data to the hard disk in real time; it makes full use of the paged storage of modern operating systems to improve I/O efficiency.

Page cache (OS cache): a cache managed by the operating system itself, organized in pages, that caches file contents.

mmap stands for memory-mapped files. It maps addresses of a file on the physical disk to page-cache addresses, so that a process reads and writes the file as if it were reading and writing memory; the OS then takes care of actually writing the data to disk.

When Kafka writes a disk file, it can write directly into this OS cache — that is, just into memory — and the operating system then decides when the data in the OS cache is really flushed to the disk file. This single step improves disk-file performance a great deal, because it is effectively writing to memory, not to disk.
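The mechanism can be sketched with Java NIO's `MappedByteBuffer`, which is what Kafka itself builds on (notably for its offset index files). The file name is illustrative; the point is that the `put` is a plain memory write that lands in the page cache, and flushing to disk is a separate, deferrable step:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch: writing a file through a memory mapping. Writes go to the
// page cache; the OS (or an explicit force()) decides when they
// actually reach the physical disk.
public class MmapSketch {
    // Write `text` into the file through a memory mapping, then read
    // it back through the normal file API to show both views agree.
    static String writeViaMmap(Path file, String text) throws IOException {
        byte[] payload = text.getBytes(StandardCharsets.UTF_8);
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // Map a region of the file into the process address space.
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, payload.length);
            buf.put(payload); // a plain memory write, no write() syscall
            buf.force();      // optional: ask the OS to flush to disk now
        }
        return Files.readString(file);
    }

    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("index-", ".tmp");
        System.out.println(writeViaMmap(file, "hello page cache"));
    }
}
```

Without the `force()` call, the data would still be visible to readers immediately (it is in the page cache), but its arrival on the physical disk would be at the OS's discretion — which is exactly the trade-off Kafka exploits.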

4. Zero copy

What optimizations does Kafka make when reading from disk?

  • Zero copy based on sendfile

Linux kernels 2.4 and later provide zero copy through the sendfile system call. Data is copied into the kernel buffer via DMA, then copied directly to the NIC buffer via DMA, with no CPU copy in between — this is where the term "zero copy" comes from. Besides reducing data copies, because reading the file and sending it over the network are completed by a single sendfile call, the whole process involves only two context switches, which greatly improves performance.

Kafka implements zero copy by delegating to the operating system through NIO's transferTo/transferFrom. In total there are two kernel-side DMA data copies, two context switches, and a single system call, eliminating the CPU data copy.
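A minimal sketch of this call is Java NIO's `FileChannel.transferTo`, which maps to sendfile on Linux. For a self-contained demo the destination below is another file channel; in Kafka the destination is the consumer's socket channel, and the file names here are illustrative:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch: zero-copy transfer with FileChannel.transferTo, the Java
// NIO call that maps to sendfile on Linux. Bytes move kernel-side;
// user space never holds the payload in an application buffer.
public class ZeroCopySketch {
    static long transferFile(Path src, Path dst) throws IOException {
        try (FileChannel in = FileChannel.open(src, StandardOpenOption.READ);
             FileChannel out = FileChannel.open(dst,
                     StandardOpenOption.WRITE, StandardOpenOption.CREATE)) {
            // One call hands the whole file to the destination channel.
            return in.transferTo(0, in.size(), out);
        }
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("segment-", ".log");
        Files.writeString(src, "record-1\nrecord-2\n");
        Path dst = Files.createTempFile("out-", ".log");
        long n = transferFile(src, dst);
        System.out.println(n + " bytes transferred");
    }
}
```

Compare this with the classic read-then-write loop, which would copy every byte into a user-space buffer and back — two extra CPU copies and two extra context switches per round trip.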

5. Batch processing

In many cases the system's bottleneck is not CPU or disk but network I/O. So, on top of the low-level batching provided by the operating system, Kafka's clients and brokers accumulate multiple records (both reads and writes) into a batch before sending data over the network. Batching amortizes the network overhead across many records and uses larger packets to improve bandwidth utilization.

6. Data compression

The producer can compress data before sending it to the broker, reducing the cost of network transmission. The compression algorithms currently supported are Snappy, gzip, LZ4, and zstd. Data compression is generally used together with batching as an optimization.
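The producer-side knobs for batching and compression can be sketched as plain properties. The keys are standard Kafka producer configs (`batch.size`, `linger.ms`, `compression.type`); the values below are illustrative tuning choices, not recommendations:

```java
import java.util.Properties;

// Sketch: producer settings that control batching and compression.
// Key names are standard Kafka producer configs; values are illustrative.
public class ProducerTuning {
    static Properties tunedProducerProps() {
        Properties props = new Properties();
        props.setProperty("batch.size", "65536");     // bytes accumulated per partition batch
        props.setProperty("linger.ms", "10");         // wait up to 10 ms for a batch to fill
        props.setProperty("compression.type", "lz4"); // snappy, gzip, lz4, or zstd
        return props;
    }

    public static void main(String[] args) {
        tunedProducerProps().forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```

Because compression is applied per batch, larger batches also compress better: similar records sitting next to each other give the codec more redundancy to exploit, so the two optimizations reinforce each other.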

To summarize, Kafka is so fast because it:

  • processes partitions in parallel;
  • writes to disk sequentially, making full use of the disk's characteristics;
  • uses the modern operating system's page cache, exploiting memory to improve I/O efficiency;
  • uses zero-copy techniques;
  • persists producer data to the broker by appending sequentially through memory-mapped files;
  • serves consumer reads from the broker with the sendfile system call, moving disk-file data into the OS kernel buffer and on to the NIC buffer for network sending, reducing CPU consumption;
  • gathers messages into batches and applies sensible batch compression to reduce network I/O.