[netdevice] Process all received packets in net_poll()
The current logic is to process at most one received packet per call to net_poll(), on the basis that refilling the hardware descriptor ring should be delayed as little as possible. However, this limits the rate at which packets can be processed and ultimately ends up adding latency which, in turn, limits the achievable throughput.

With temporary modifications in place to essentially remove all resource constraints (heap size increased to 16MB, RX descriptor ring increased to 64 descriptors) and a TCP window size of 1MB, the throughput on a gigabit (i.e. 119MBps) network can be observed to fall off exponentially from around 115MBps to around 75MBps. Changing net_poll() to process all received packets results in a steady 119MBps throughput.

Signed-off-by: Michael Brown <mcb30@ipxe.org>
commit b0e236a9ee
parent f3d197a529
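The commit message's argument is that draining at most one packet per poll allows the receive backlog, and therefore per-packet latency, to grow whenever more than one packet arrives between calls to net_poll(). As a rough, self-contained illustration only (this is not iPXE code, and the arrival rate below is an invented number), the following toy model shows the backlog growing steadily under one-per-poll processing and staying bounded when the queue is drained on every poll.

/* Toy model (not iPXE code): compare queue depth when at most one
 * packet is dequeued per poll versus draining the whole queue.
 * The arrival rate is an assumption chosen only to show the effect. */
#include <stdio.h>

int main ( void ) {
        unsigned int arrivals_per_poll = 3;     /* assumed arrival rate */
        unsigned int backlog_one = 0;           /* old behaviour: one dequeue */
        unsigned int backlog_all = 0;           /* new behaviour: drain queue */

        for ( unsigned int poll = 1 ; poll <= 10 ; poll++ ) {
                backlog_one += arrivals_per_poll;
                backlog_all += arrivals_per_poll;
                if ( backlog_one )
                        backlog_one--;          /* "if ( dequeue )": one packet */
                backlog_all = 0;                /* "while ( dequeue )": all packets */
                printf ( "poll %2u: one-per-poll backlog=%2u, drain-all backlog=%u\n",
                         poll, backlog_one, backlog_all );
        }
        return 0;
}

In this model the one-per-poll backlog grows by two packets on every poll; that unbounded growth is the queueing delay the commit removes.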
@@ -750,13 +750,8 @@ void net_poll ( void ) {
 		if ( netdev_rx_frozen ( netdev ) )
 			continue;
 
-		/* Process at most one received packet. Give priority
-		 * to getting packets out of the NIC over processing
-		 * the received packets, because we advertise a window
-		 * that assumes that we can receive packets from the
-		 * NIC faster than they arrive.
-		 */
-		if ( ( iobuf = netdev_rx_dequeue ( netdev ) ) ) {
+		/* Process all received packets */
+		while ( ( iobuf = netdev_rx_dequeue ( netdev ) ) ) {
 
 			DBGC2 ( netdev, "NETDEV %s processing %p (%p+%zx)\n",
 				netdev->name, iobuf, iobuf->data,
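For readers who want to see the changed control flow in isolation, below is a compilable miniature of the post-change shape of the loop body: skip a device whose receive path is frozen, otherwise drain its entire receive queue. All names in the sketch (struct device, rx_frozen, rx_dequeue(), poll_all()) are invented stand-ins, not iPXE symbols; only the frozen-check / drain-loop structure is taken from the hunk above.

/* Compilable miniature (not iPXE code) of the post-change poll shape. */
#include <stdio.h>
#include <stddef.h>
#include <stdbool.h>

struct device {
        const char *name;
        bool frozen;            /* stand-in for netdev_rx_frozen() */
        const char **pkts;      /* stand-in receive queue */
        size_t count, pos;
};

/* Stand-in for netdev_rx_dequeue(): next packet, or NULL when empty */
static const char * rx_dequeue ( struct device *dev ) {
        return ( dev->pos < dev->count ) ? dev->pkts[dev->pos++] : NULL;
}

static void poll_all ( struct device *devs, size_t n ) {
        for ( size_t i = 0 ; i < n ; i++ ) {
                struct device *dev = &devs[i];
                const char *pkt;

                if ( dev->frozen )
                        continue;

                /* Process all received packets (the "while" from the diff) */
                while ( ( pkt = rx_dequeue ( dev ) ) )
                        printf ( "%s processing %s\n", dev->name, pkt );
        }
}

int main ( void ) {
        const char *q0[] = { "pkt0", "pkt1", "pkt2" };
        const char *q1[] = { "pkt3" };
        struct device devs[] = {
                { .name = "net0", .frozen = false, .pkts = q0, .count = 3 },
                { .name = "net1", .frozen = true,  .pkts = q1, .count = 1 },
        };

        poll_all ( devs, 2 );
        return 0;
}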