nanog mailing list archives

Re: Jumbo Frames (was Re: MAE-EAST Moving? from Tysons corner to reston VA. )
From: "Richard A. Steenbergen" <ras () e-gerbil net>
Date: Mon, 19 Jun 2000 05:56:49 -0400 (EDT)


On Mon, 19 June 2000, "Bora Akyol" wrote:

As long as most end users are running Ethernet, Fast Ethernet, DSL or Cable
Modems, what is the point of jumbo frames/packets other than transferring
BGP tables really fast? Did anyone look into how many packets are moved
through an OC-48 in one second? (Approx. 6 million 40-byte packets.) I think
even without jumbo frames, this bandwidth will saturate most CPUs.
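For reference, the arithmetic behind that figure can be sketched roughly (a back-of-the-envelope check, not from the original post; the ~2.4 Gbps payload figure assumes SONET framing overhead, and the exact packet rate depends on link-layer encapsulation):

```python
# Rough packets-per-second ceiling for an OC-48 link carrying
# minimum-size 40-byte packets (a bare TCP ACK: 20-byte IP header
# plus 20-byte TCP header, no link-layer overhead counted).
oc48_payload_bps = 2.4e9   # ~2.488 Gbps line rate minus SONET overhead
packet_bits = 40 * 8       # one minimum-size TCP/IP packet

pps = oc48_payload_bps / packet_bits
print(f"{pps / 1e6:.1f} million packets/sec")  # 7.5 million packets/sec
```

Counting per-packet link-layer framing brings this closer to the "approx. 6 million" cited above.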

Jumbo frames are pointless until most of the Internet end users switch to a
jumbo frame based media.

Yes, they look cool on the feature list (we support it as well). Yes, they
are marginally more efficient than 1500-byte MTUs (40/1500 vs. 40/9000). But
in reality, 99% or more of the traffic out there is less than 1500 bytes. In
terms of packet counts, the last time I looked, about 50% of the packets were
around 40 bytes (ACKs), with another 40% or so at approximately 576 bytes.
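The efficiency difference those 40/1500 vs. 40/9000 figures refer to can be made concrete (a sketch; the 40 bytes stands in for the IP+TCP headers, and link-layer framing is ignored):

```python
# Fraction of each full-size packet spent on a 40-byte IP+TCP header
# at common MTUs. Larger MTUs shrink the relative header overhead.
header_bytes = 40
for mtu in (576, 1500, 9000):
    overhead = header_bytes / mtu
    print(f"MTU {mtu:5d}: {overhead:6.2%} header overhead")
# MTU   576:  6.94% header overhead
# MTU  1500:  2.67% header overhead
# MTU  9000:  0.44% header overhead
```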

What is the big, clear advantage of supporting jumbo frames?

When 1500-byte frames from the customer's LAN reach the customer's
router and enter some form of IP tunnel, a core fabric which
supports larger than 1500-byte frames will not cause fragmentation.
It's not necessary to do the full jumbo size frames. I suspect that
supporting two levels of encapsulation will be enough in 99.9% of the
cases. For the sake of argument, what would be the downside of using a
2000 byte MTU as the minimum MTU in your core?
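As an illustration of the two-levels-of-encapsulation point, the headroom works out comfortably (a sketch using typical IPv4+GRE header sizes, which are an assumption here, not figures from the original post):

```python
# Check that a full 1500-byte customer packet survives two layers of
# IP tunneling inside a 2000-byte core MTU without fragmentation.
customer_mtu = 1500
tunnel_overhead = 20 + 4   # outer IPv4 header + basic GRE header, per layer
layers = 2

needed = customer_mtu + layers * tunnel_overhead
core_mtu = 2000
print(needed, "<=", core_mtu, "->", needed <= core_mtu)
# 1548 <= 2000 -> True
```

Even with more generous encapsulations (IPsec, MPLS labels, VLAN tags), 500 bytes of headroom covers the common cases.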

When the next end user upgrade is deployed and everyone has devices which
can support larger MTUs, wouldn't it be a shame if they said "if only the
internet core ran at larger MTUs, we could negotiate higher MTUs and make
everyone happier". Also, it is far more than "marginally more efficient".
For every packet you deal with, there is a great amount of work doing
routing lookups, dealing with memory management, and handling interrupts.
Copying another few bytes of data is easy in comparison.
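That per-packet cost argument can be quantified: at a fixed bit rate, a larger MTU divides the number of lookups and interrupts per second (a sketch assuming back-to-back full-size packets on a 1 Gbps link):

```python
# Packets per second needed to fill a 1 Gbps link at different MTUs.
# Every packet costs a routing lookup, buffer handling, and (without
# coalescing) an interrupt, so a 6x larger MTU is a 6x per-packet saving.
link_bps = 1e9
for mtu in (1500, 9000):
    pps = link_bps / (mtu * 8)
    print(f"MTU {mtu}: {pps:,.0f} packets/sec")
# MTU 1500: 83,333 packets/sec
# MTU 9000: 13,889 packets/sec
```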

Since we are asking GigE to act in a server and backbone role, we should
acknowledge that the requirements will be different from the average
end-user ethernet. One of those requirements is that the backbone should
be able to pass larger packets it may encounter without resorting to
fragmentation (which only gets harder as we get into higher speeds).

Aside from that, and the fact that there is nothing harmful in supporting
larger packets through your network, there is the fact that if we want
people to support standards we KNOW are good for them (even if they
don't), we have to actually ask for it. Imagine an internet with a
reliable MTU negotiation mechanism, which can take advantage of improved
throughput, much lower CPU usage, zero copy, page-flipping, DMA transfers,
and all those other lovely things.

These are important for many reasons. Without these techniques, we can't
even do line rate GigE on "commonplace" servers, let alone have any CPU
left over to do more than just send packets. It's easy to just say "we'll
throw a server farm at it" or "we'll just get a faster processor", but as
higher speed links become more common, and as GigE becomes common in
servers (when servers can actually use it effectively) and 10GigE becomes
commonplace for backbone links, we'll start to see these things matter.
Why engineer ourselves into a corner of shortsightedness which only gets
harder and harder to fix, because it's "easier" to do nothing?

(sorry Michael, just using your msg as a good point to reply :P)

-- 
Richard A Steenbergen <ras () e-gerbil net>   http://www.e-gerbil.net/humble
PGP Key ID: 0x138EA177  (67 29 D7 BC E8 18 3E DA  B2 46 B3 D8 14 36 FE B6)

