Learn where some of the network sysctl variables fit into the Linux/Kernel network flow. Translations: 🇷🇺

Introduction

Sometimes people look for sysctl cargo-cult values that promise high throughput and low latency with no trade-offs and that work in every scenario. That's not realistic, although we can say that newer kernel versions are very well tuned by default. In fact, you might hurt performance if you mess with the defaults.

This brief tutorial shows where some of the most used and quoted sysctl/network parameters sit in the Linux network flow. It was heavily inspired by the illustrated guide to the Linux networking stack and many of Marek Majkowski's posts.

Feel free to send corrections and suggestions! :)

Linux network queues overview

(figure: Linux network queues overview)

Fitting the sysctl variables into the Linux network flow

Ingress - they're coming

  1. Packets arrive at the NIC
  2. The NIC verifies the MAC address (if not in promiscuous mode) and the FCS, and decides to drop or continue
  3. The NIC DMAs packets into RAM, in a region previously prepared (mapped) by the driver
  4. The NIC enqueues references to the packets in the receive ring buffer queue rx until the rx-usecs timeout or rx-frames is reached
  5. The NIC raises a hard IRQ
  6. The CPU runs the IRQ handler, which runs the driver's code
  7. The driver schedules a NAPI poll, clears the hard IRQ and returns
  8. The driver raises a soft IRQ (NET_RX_SOFTIRQ)
  9. NAPI polls data from the receive ring buffer until the netdev_budget_usecs timeout is reached, or netdev_budget and dev_weight packets have been processed
  10. Linux also allocates memory for the sk_buff
  11. Linux fills in the metadata: protocol, interface, sets the MAC header, removes the Ethernet header
  12. Linux passes the skb to the kernel stack (netif_receive_skb)
  13. It sets the network header, clones the skb to taps (e.g. tcpdump) and passes it to tc ingress
  14. Packets are handed to a qdisc sized by netdev_max_backlog, with its algorithm defined by default_qdisc
  15. It calls ip_rcv and packets are handed to the IP layer
  16. It calls netfilter (PREROUTING)
  17. It looks at the routing table to decide whether to forward or deliver locally
  18. If it's local, it calls netfilter (LOCAL_IN)
  19. It calls the L4 protocol handler (for instance tcp_v4_rcv)
  20. It finds the right socket
  21. It goes through the TCP finite state machine
  22. It enqueues the packet to the receive buffer, sized according to the tcp_rmem rules
    1. If tcp_moderate_rcvbuf is enabled, the kernel auto-tunes the receive buffer
  23. The kernel signals that there is data available to apps (epoll or any polling system)
  24. The application wakes up and reads the data
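
A quick way to see this ingress path in action is to watch the per-CPU NET_RX soft IRQ counters grow while traffic flows; a minimal sketch, assuming only that some interface is receiving traffic:

# highlight per-CPU changes of the NET_RX/NET_TX soft IRQ counters every second
watch -n1 -d 'grep -E "NET_RX|NET_TX" /proc/softirqs'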

Egress - they're leaving

  1. The application sends a message (sendmsg or other)
  2. The TCP send path allocates an sk_buff
  3. It enqueues the skb to the socket write buffer, sized by tcp_wmem
  4. It builds the TCP header (src and dst port, checksum)
  5. It calls the L3 handler (in this case ipv4, via tcp_write_xmit and tcp_transmit_skb)
  6. L3 (ip_queue_xmit) does its work: builds the IP header and calls netfilter (LOCAL_OUT)
  7. It calls the output route action
  8. It calls netfilter (POST_ROUTING)
  9. It fragments the packet if needed (ip_output)
  10. It calls the L2 send function (dev_queue_xmit)
  11. It feeds the output (qdisc) queue, of txqueuelen length, with its algorithm defined by default_qdisc
  12. The driver code enqueues the packets in the ring buffer tx
  13. The driver raises a soft IRQ (NET_TX_SOFTIRQ) after the tx-usecs timeout or tx-frames is reached
  14. It re-enables the hard IRQ for the NIC
  15. The driver maps all the packets (to be sent) to a DMA'ed region
  16. The NIC fetches the packets (via DMA) from RAM to transmit
  17. After the transmission the NIC raises a hard IRQ to signal its completion
  18. The driver handles this IRQ (turns it off)
  19. And schedules (soft IRQ) the NAPI poll system
  20. NAPI handles the completion signaling and frees the RAM
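
To check whether the egress path is dropping or requeueing packets, you can inspect the per-device TX statistics and the qdisc counters; a minimal sketch (eth0 is an assumed interface name):

# per-device TX statistics: watch the dropped and overrun counters
ip -s -s link show dev eth0

# qdisc-level statistics: look for dropped and requeues on the egress queue
tc -s qdisc show dev eth0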

How to check - perf

If you want to trace the network flow within Linux, you can use perf.

docker run -it --rm --cap-add SYS_ADMIN --entrypoint bash ljishen/perf
apt-get update
apt-get install iputils-ping

# this is going to trace all events (not syscalls) of the net:* subsystem while performing the ping
perf trace --no-syscalls --event 'net:*' ping globo.com -c1 > /dev/null

(figure: perf trace output for net:* events)

What, Why and How - network and sysctl parameters

Ring Buffer - rx,tx

  • What - the driver's receive/send queue, a single queue or multiple queues, with a fixed size, usually implemented as a FIFO; it is located in RAM
  • Why - a buffer to smoothly absorb bursts of connections without dropping them; you might need to increase these queues when you see drops or overruns, i.e. more packets are coming than the kernel is able to consume, with the side effect of possibly increased latency.
  • How:
    • Check command: ethtool -g ethX
    • Change command: ethtool -G ethX rx value tx value
    • How to monitor: ethtool -S ethX | grep -e "err" -e "drop" -e "over" -e "miss" -e "timeout" -e "reset" -e "restar" -e "collis" | grep -v "\: 0" (see the sketch after this list)
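
A minimal sketch of checking and growing the rings (eth0 is an assumed interface name; the "Pre-set maximums" reported by ethtool -g are hardware limits):

# show current settings and the hardware pre-set maximums
ethtool -g eth0

# grow both rings; 4096 is an illustrative value, bounded by the maximums above
ethtool -G eth0 rx 4096 tx 4096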

Interrupt Coalescence (IC) - rx-usecs, tx-usecs, rx-frames, tx-frames (hardware IRQ)

  • What - number of microseconds/frames to wait before raising a hard IRQ; from the NIC's perspective, it will DMA data packets until this timeout/number of frames is reached
  • Why - reduces CPU usage and the number of hard IRQs; it might increase throughput at the cost of latency.
  • How:
    • Check command: ethtool -c ethX
    • Change command: ethtool -C ethX rx-usecs value tx-usecs value
    • How to monitor: cat /proc/interrupts
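
A minimal sketch (eth0 is an assumed interface name, and not every driver supports every coalescing knob):

# show the current interrupt coalescing settings
ethtool -c eth0

# wait up to 8 microseconds (or 32 frames, whichever comes first) before raising an RX hard IRQ
ethtool -C eth0 rx-usecs 8 rx-frames 32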

Interrupt Coalescing (soft IRQ) and Ingress QDisc

  • What - maximum number of microseconds in one NAPI polling cycle. Polling will exit when either netdev_budget_usecs have elapsed during the poll cycle or the number of packets processed reaches netdev_budget.
  • Why - instead of reacting to tons of soft IRQs, the driver keeps polling data; keep an eye on dropped (# of packets that were dropped because netdev_max_backlog was exceeded) and squeezed (# of times ksoftirqd ran out of netdev_budget or its time slice with work remaining).
  • How:
    • Check command: sysctl net.core.netdev_budget_usecs
    • Change command: sysctl -w net.core.netdev_budget_usecs value
    • How to monitor: cat /proc/net/softnet_stat; or a better tool
  • What - netdev_budget is the maximum number of packets taken from all interfaces in one polling cycle (NAPI poll). In one polling cycle interfaces which are registered to polling are probed in a round-robin manner. Also, a polling cycle may not exceed netdev_budget_usecs microseconds, even if netdev_budget has not been exhausted.
  • How:
    • Check command: sysctl net.core.netdev_budget
    • Change command: sysctl -w net.core.netdev_budget value
    • How to monitor: cat /proc/net/softnet_stat; or a better tool
  • What - dev_weight is the maximum number of packets that the kernel can handle on a NAPI interrupt; it's a per-CPU variable. For drivers that support LRO or GRO_HW, a hardware-aggregated packet is counted as one packet here.
  • How:
    • Check command: sysctl net.core.dev_weight
    • Change command: sysctl -w net.core.dev_weight value
    • How to monitor: cat /proc/net/softnet_stat; or a better tool
  • What - netdev_max_backlog is the maximum number of packets queued on the INPUT side (the ingress qdisc) when the interface receives packets faster than the kernel can process them.
  • How:
    • Check command: sysctl net.core.netdev_max_backlog
    • Change command: sysctl -w net.core.netdev_max_backlog value
    • How to monitor: cat /proc/net/softnet_stat; or a better tool
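
The columns of /proc/net/softnet_stat are per-CPU hexadecimal counters; the 1st is processed, the 2nd is dropped and the 3rd is squeezed. A minimal sketch to decode them, assuming GNU awk (for strtonum) and the classic column layout (newer kernels append extra columns but keep these first three):

# print per-CPU processed, dropped and squeezed counters in decimal
awk '{ printf "cpu=%d processed=%d dropped=%d squeezed=%d\n", NR-1, strtonum("0x"$1), strtonum("0x"$2), strtonum("0x"$3) }' /proc/net/softnet_stat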

Egress QDisc - txqueuelen and default_qdisc

  • What - txqueuelen is the maximum number of packets queued on the OUTPUT side.
  • Why - a buffer/queue to absorb connection bursts and also a place to apply tc (traffic control).
  • How:
    • Check command: ifconfig ethX
    • Change command: ifconfig ethX txqueuelen value
    • How to monitor: ip -s link
  • What - default_qdisc is the default queuing discipline to use for network devices.
  • Why - each application has a different load and needs traffic control; it is also used to fight bufferbloat
  • How:
    • Check command: sysctl net.core.default_qdisc
    • Change command: sysctl -w net.core.default_qdisc value
    • How to monitor: tc -s qdisc ls dev ethX
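
A minimal sketch of switching the default qdisc to fq (commonly recommended alongside the BBR congestion control) and verifying it; note that default_qdisc applies to interfaces created (or qdiscs reset) after the change, and that iproute2's ip link can set txqueuelen without the deprecated ifconfig (eth0 and 5000 are assumed values):

# use fq as the default qdisc for newly created network devices
sysctl -w net.core.default_qdisc=fq

# set the device transmit queue length via iproute2
ip link set dev eth0 txqueuelen 5000

# verify the qdisc in use and its drop/requeue statistics
tc -s qdisc show dev eth0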

TCP Read and Write Buffers/Queues

The policy that defines what counts as memory pressure is specified in tcp_mem and tcp_moderate_rcvbuf.

  • What - tcp_rmem - min (size used under memory pressure), default (initial size), max (maximum size) - size of receive buffer used by TCP sockets.
  • Why - this is the buffer/queue where received data waits for the application to read it; understanding its consequences can help a lot.
  • How:
    • Check command: sysctl net.ipv4.tcp_rmem
    • Change command: sysctl -w net.ipv4.tcp_rmem="min default max"; when changing the default value, remember to restart your user-space app (e.g. your web server, nginx, etc.)
    • How to monitor: cat /proc/net/sockstat
  • What - tcp_wmem - min (size used under memory pressure), default (initial size), max (maximum size) - size of send buffer used by TCP sockets.
  • How:
    • Check command: sysctl net.ipv4.tcp_wmem
    • Change command: sysctl -w net.ipv4.tcp_wmem="min default max"; when changing the default value, remember to restart your user-space app (e.g. your web server, nginx, etc.)
    • How to monitor: cat /proc/net/sockstat
  • What - tcp_moderate_rcvbuf - if set, TCP performs receive buffer auto-tuning, attempting to automatically size the buffer.
  • How:
    • Check command: sysctl net.ipv4.tcp_moderate_rcvbuf
    • Change command: sysctl -w net.ipv4.tcp_moderate_rcvbuf value
    • How to monitor: cat /proc/net/sockstat
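
A minimal sketch of inspecting and raising the TCP buffer limits (the numbers are illustrative only; also note that an application calling setsockopt with SO_RCVBUF/SO_SNDBUF disables auto-tuning for that socket):

# current min/default/max sizes, in bytes
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem

# raise only the maximums (illustrative values)
sysctl -w net.ipv4.tcp_rmem="4096 131072 33554432"
sysctl -w net.ipv4.tcp_wmem="4096 16384 33554432"

# watch overall TCP memory usage (in pages) against the tcp_mem thresholds
cat /proc/net/sockstat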

Honorable mentions - TCP FSM and congestion algorithm

Accept and SYN Queues are governed by net.core.somaxconn and net.ipv4.tcp_max_syn_backlog. Nowadays net.core.somaxconn caps both queue sizes.

  • sysctl net.core.somaxconn - provides an upper limit on the value of the backlog parameter passed to the listen() function, known in userspace as SOMAXCONN. If you change this value, you should also change your application to a compatible value (e.g. nginx's backlog).
  • cat /proc/sys/net/ipv4/tcp_fin_timeout - this specifies the number of seconds to wait for a final FIN packet before the socket is forcibly closed. This is strictly a violation of the TCP specification but required to prevent denial-of-service attacks.
  • cat /proc/sys/net/ipv4/tcp_available_congestion_control - shows the available congestion control choices that are registered.
  • cat /proc/sys/net/ipv4/tcp_congestion_control - sets the congestion control algorithm to be used for new connections.
  • cat /proc/sys/net/ipv4/tcp_max_syn_backlog - sets the maximum number of queued connection requests which have still not received an acknowledgment from the connecting client; if this number is exceeded, the kernel will begin dropping requests.
  • cat /proc/sys/net/ipv4/tcp_syncookies - enables/disables SYN cookies, useful for protecting against SYN flood attacks.
  • cat /proc/sys/net/ipv4/tcp_slow_start_after_idle - enables/disables TCP slow start after idle, i.e. whether the congestion window is reset after the connection has been idle for a while.
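
You can watch the accept queue directly with ss: for listening sockets, Recv-Q is the current accept queue length and Send-Q is its configured backlog limit. A minimal sketch (port 8080 is an assumed listener):

# Recv-Q = current accept queue size, Send-Q = backlog limit for listeners
ss -lnt 'sport = :8080'

# if these counters grow, raise somaxconn and the application's listen() backlog
nstat -az TcpExtListenOverflows TcpExtListenDrops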

How to monitor:

  • netstat -atn | awk '/tcp/ {print $6}' | sort | uniq -c - summary by state
  • ss -neopt state time-wait | wc -l - count sockets in a specific state: established, syn-sent, syn-recv, fin-wait-1, fin-wait-2, time-wait, closed, close-wait, last-ack, listening, closing
  • netstat -st - tcp stats summary
  • nstat -a - human-friendly tcp stats summary
  • cat /proc/net/sockstat - summarized socket stats
  • cat /proc/net/tcp - detailed stats, see each field meaning at the kernel docs
  • cat /proc/net/netstat - ListenOverflows and ListenDrops are important fields to keep an eye on
    • cat /proc/net/netstat | awk '(f==0) { i=1; while ( i<=NF) {n[i] = $i; i++ }; f=1; next} (f==1){ i=2; while ( i<=NF){ printf "%s = %d\n", n[i], $i; i++}; f=0}' | grep -v "= 0" - a human-readable /proc/net/netstat

(figure: TCP finite state machine. Source: https://commons.wikimedia.org/wiki/File:Tcp_state_diagram_fixed_new.svg)

Network tools for testing and monitoring

  • iperf3 - network throughput
  • vegeta - HTTP load testing tool
  • netdata - system for distributed real-time performance and health monitoring
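
A minimal iperf3 sketch to measure raw TCP throughput between two hosts (10.0.0.1 is an assumed server address; iperf3 must be installed on both ends):

# on the server
iperf3 -s

# on the client: 4 parallel streams for 30 seconds toward the server
iperf3 -c 10.0.0.1 -P 4 -t 30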
