mailing list archives
Re: jumbo frames
From: Wayne Bouchard <web () typo org>
Date: Wed, 30 May 2001 14:41:29 -0700
On Wed, May 30, 2001 at 03:15:20PM -0400, Richard A. Steenbergen wrote:
On Tue, 29 May 2001, Dave Siegel wrote:
I've seen a lot of discussion about why one would want to do Jumbo
frames on your backbone...let's assume for the sake of argument that a
customer requirement is to support 9000-byte packets across your
backbone, without fragmentation.
Why not bump MTU up to 9000 on your backbone interfaces (assuming they
support it)? What negative effects might this have on your network?
a) performance delivering average packet sizes
c) buffer/pkt memory utilization
d) other areas
Theoretically increasing the MTU anywhere you are not actually generating
packets should have no impact except to prevent unnecessary fragmentation.
But then again, theoretically IOS shouldn't get buggier with each release.
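To put a number on the fragmentation being avoided, here's a quick sketch (hypothetical sizes, assuming plain IPv4 with a 20-byte header and no options) of how a large packet splits over a smaller-MTU hop:

```python
# Sketch: how many fragments a large IPv4 packet needs on a smaller-MTU hop.
# Assumes a 20-byte IP header with no options (hypothetical example).
IP_HEADER = 20

def fragment_count(packet_size, link_mtu):
    payload = packet_size - IP_HEADER            # data in the original packet
    per_frag = (link_mtu - IP_HEADER) // 8 * 8   # fragment payload is a multiple of 8
    return -(-payload // per_frag)               # ceiling division

# A 9000-byte packet crossing a 1500-byte MTU link becomes 7 fragments.
print(fragment_count(9000, 1500))  # -> 7
```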
Well, the way it oughta work is that the backbone uses the same MTU as
that of the largest MTU of your endpoints. So, for example, you have a
buncha hosts on a fddi ring running at 4470, you want to make sure
those frames don't have to get fragmented inside your network. Ideally,
all hosts have the same MTU and no one has to worry about that, but in
practice, it seems to be better to push the fragmentation as close to
the end user as possible. (That is, if a user on a 1500-MTU link makes
a request to a host on a 4470 link, the response is 4470 up until the
user's end network.) Of course, path MTU discovery makes this a moot
point. The conversation will be held in 1500 byte fragments.
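That effect can be modeled trivially: the conversation settles on the smallest link MTU along the path. (A toy model; real path MTU discovery, per RFC 1191, learns this from ICMP "fragmentation needed" messages rather than knowing the path up front.)

```python
# Toy model: path MTU discovery converges on the smallest link MTU
# anywhere along the path (RFC 1191).
def path_mtu(link_mtus):
    return min(link_mtus)

# A user on a 1500-MTU link talking to a host on a 4470 FDDI ring:
# the whole conversation runs at 1500, whatever the backbone supports.
print(path_mtu([1500, 4470, 4470, 4470]))  # -> 1500
```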
That brings up another interesting question... Everything in the
backbone these days that's DS3 and up talks at 4470. But ethernet
(gig-e, etc), T1s, and dial lines still talk at 1500. I wonder if
there are any paths that exist at 4470 all the way through. (probably,
but probably rare.)
What I've said for some time now is that I would like to see hosts
abandon the 1500 byte MTU and move to something larger in the
interests of efficiency (preferably 4470 and multiples thereof so we
can actually establish a "rule of thumb" for larger MTU sizes.) It's
not much, I grant you, but with increasingly higher bandwidths
available to the average user, every little bit helps.
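The "every little bit" can be put in numbers with a back-of-the-envelope calculation (assuming 40 bytes of TCP/IP header per packet and ignoring link-layer framing):

```python
# Back-of-the-envelope: fraction of each full-sized packet that is
# payload rather than header, assuming 40 bytes of TCP/IP headers
# (20 IP + 20 TCP, no options) and ignoring link-layer framing.
HEADERS = 40

def payload_efficiency(mtu):
    return (mtu - HEADERS) / mtu

for mtu in (1500, 4470, 9000):
    print(f"MTU {mtu}: {payload_efficiency(mtu):.1%} payload")
# MTU 1500: 97.3% payload
# MTU 4470: 99.1% payload
# MTU 9000: 99.6% payload
```

A couple of percent of header overhead saved, plus proportionally fewer packets per second for routers and hosts to process.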
There will obviously be different packet handling techniques for the
larger packets, and I'm not aware of any performance or stability testing
that has been done for jumbo frames. I'm guessing the people who are
actively using them haven't been testing them at line rate under mixed
small- and large-packet conditions.
Well, the problem with buffering 9k packets is that it doesn't take
many of them to bloat a queue. On links that pass tens of thousands of
packets per second, 0.25 seconds of buffer space takes a lot of
memory.
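Putting rough numbers on that (hypothetical figures: 50,000 packets per second, worst case all of them 9000-byte jumbos):

```python
# Rough sizing of a 0.25-second queue, assuming a hypothetical link
# carrying 50,000 packets/sec of worst-case 9000-byte jumbo frames.
def buffer_bytes(pps, pkt_size, seconds):
    return pps * pkt_size * seconds

mb = buffer_bytes(50_000, 9000, 0.25) / 1e6
print(f"{mb:.1f} MB of buffer memory")  # -> 112.5 MB
```

Over 100 MB of packet memory for a single link's queue, which was far from cheap on 2001-era line cards.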
Obviously anything extra and uncommon you try to do runs the risk of
setting off new bugs (even common stuff sets off new bugs). I can tell you
some of the drivers I have seen for PC gige cards (especially linux) badly
mangle jumbo frames and may not perform well.
"tag-switching mtu 1518"?