In the previous article, we examined how to develop a network sniffer with the PF_SOCKET socket on the Linux platform. The sniffer developed in the last article captures all the network packets. But a powerful network sniffer like tcpdump should also provide packet filtering functionality. For instance, the sniffer may capture only TCP segments (and skip UDP), or only packets from a specific source IP address. In this article, let's continue to explore how to do that.

## Background of BPF

Berkeley Packet Filter (BPF) is the essential underlying technology for packet capture in Unix-like operating systems. Search for BPF online, and the results are very confusing: it turns out that BPF keeps evolving, and there are several associated concepts, such as BPF, cBPF, eBPF, and LSF. So let us examine those concepts along the timeline:

In 1992, BPF was first introduced to the BSD Unix system for filtering unwanted network packets. The proposal of BPF came from researchers at Lawrence Berkeley Laboratory, who also developed libpcap and tcpdump.

In 1997, the Linux Socket Filter (LSF) was developed based on BPF and introduced in Linux kernel version 2.1.75. Note that LSF and BPF have some distinct differences, but in the Linux context, when we speak of BPF or LSF, we mean the same packet filtering mechanism in the Linux kernel. We'll examine the detailed theory and design of BPF in the following sections.

Originally, BPF was designed as a network packet filter. But in 2013, BPF was widely extended, and it can now be used for non-networking purposes such as performance analysis and troubleshooting. Nowadays, the extended BPF is called eBPF, and the original, obsolete version has been renamed classic BPF (cBPF). Note that what we examine in this article is cBPF; eBPF is outside the scope of this article. eBPF is the hottest technology in today's software world, and I'll talk about it in the future.

The first question to answer is where we should place the filter. The last article examined the path of a received packet as follows:

(figure: the path of a received packet)

The best solution to this question is to put the filter as early as possible in the path, since copying a large amount of data from kernel space to user space produces a huge overhead, which can greatly influence system performance. As the original BPF paper says:

> To minimize memory traffic, the major bottleneck in most modern systems, the packet should be filtered 'in place' (e.g., where the network interface DMA engine put it) rather than copied to some other kernel buffer before filtering.

So the filter should be triggered immediately when a packet is received at the network interface. Let's verify this behavior by examining the kernel source code as follows (note: the kernel code shown in this article is based on version 2.6, which contains the cBPF implementation):

```c
/* source code file of net/packet/af_packet.c */
static int packet_create(struct net *net, struct socket *sock, int protocol)
{
    ...
    po->prot_hook.func = packet_rcv;       // attach hook function to socket
    ...
    po->prot_hook.func = packet_rcv_spkt;  // attach hook function to socket
    ...
}
```

The packet_create function handles socket creation when the application calls the socket system call. In lines 11 and 14 of the full listing (abridged above), it attaches the hook function to the socket. The hook function executes when a packet is received. The following code block shows the hook function packet_rcv:

```c
/* hook function packet_rcv is triggered when the packet is received */
static int packet_rcv(struct sk_buff *skb, struct net_device *dev,
                      struct packet_type *pt, struct net_device *orig_dev)
{
    ...
    res = run_filter(skb, sk, snaplen);            // filter logic
    ...
    __skb_queue_tail(&sk->sk_receive_queue, skb);  // put the packet into the queue
    ...
}
```

The packet_rcv function calls run_filter, which is just the BPF logic part (for now, you can regard it as a black box; in the next section, we'll examine the details). Based on the return value of run_filter, the packet is either filtered out or put into the receive queue.