nanog mailing list archives
Re:
From: "Craig A. Haney" <craig () seamless kludge net>
Date: Tue, 9 Jan 2001 08:00:29 -0500
At 03:32 -0800 2001/01/09, Vadim Antonov wrote:

You mean you really have any other option when you want to interconnect a few 300 Gbps backbones? :) Both of the boxes mentioned are in the 120 Gbps range, fabric-capacity-wise. If you think that's enough, I'd like to point at the DSL deployment rate. Basing exchange points on something which is already inadequate is a horrific mistake, IMHO.

Exchange points are major choke points, given that 80% or so of traffic crosses an IXP or a bilateral private interconnection. Despite the obvious advantages of shared IXPs, the private interconnects between large backbones were a forced solution, purely for capacity reasons.

--vadim
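The capacity argument above can be checked with back-of-the-envelope arithmetic. A minimal sketch, using only the figures from the message (~120 Gbps fabric, 300 Gbps backbones; the function name and backbone count of three are illustrative assumptions, not from the thread):

```python
def oversubscription(backbones: int, backbone_gbps: float, fabric_gbps: float) -> float:
    """Ratio of attached backbone capacity to switch fabric capacity.

    Anything much above 1.0 means the fabric is a potential choke point.
    """
    return (backbones * backbone_gbps) / fabric_gbps

# Even three 300 Gbps backbones present 900 Gbps of potential demand
# against a 120 Gbps fabric -- a 7.5:1 oversubscription.
print(oversubscription(3, 300, 120))  # 7.5
```

This is why basing an exchange point on a ~120 Gbps box looks inadequate before it is even deployed: the fabric is outrun by a small handful of attached backbones.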
Exchange points being choke points is more complex than that:

- Backbones interconnect directly because it makes what were public traffic stats private. It is also a more financially sound model than involving a third party; it minimizes expenses.
- Backbones limiting bandwidth into an Exchange Point also makes it a choke point.
- Pulling out of an Exchange, or demoting its importance to a particular backbone, provides a justification for not having equitable peering.
- Knowing that so much traffic flows between backbones makes it a political tug of war, which is what brought on direct interconnects.
- Private interconnects were not a forced solution. They were for revenue and political reasons, not purely capacity. There has been this notion of Tier 1, 2, 3 ... because of this.
- Achieving an equitable financial return at an Exchange means turning smaller peers into customers.
I am sure I have not nearly covered everything here.

-craig
On Mon, 8 Jan 2001, Daniel L. Golding wrote:

There are a number of boxes that can do this, or are in beta. It would be a horrific mistake to base an exchange point of any size around one of them. Talk about difficulty troubleshooting, not to mention managing the exchange point. Get a Foundry BigIron 4000 or a Riverstone SSR. An exchange point in a box, so to say. The Riverstone can support the inverse-mux application nicely on its own, as can a Foundry when combined with a Tiara box.

Daniel Golding
NetRail, Inc.
"Better to light a candle than to curse the darkness"

On Mon, 8 Jan 2001, Vadim Antonov wrote:

> There's another option for IXP architecture: virtual routers over a
> scalable fabric. This is the only approach which combines the capacity of
> inverse-multiplexed parallel L1 point-to-point links with the flexibility of
> L2/L3 shared-media IXPs. The box which can do that is in field trials
> (though I'm not sure the current release of software supports that
> functionality).
>
> --vadim
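For readers unfamiliar with the inverse-mux application discussed above, the core idea is striping traffic round-robin across parallel links so they behave as one fat pipe. A minimal conceptual sketch (the link names and function are illustrative, not the Riverstone or Tiara implementation, and real hardware also handles per-link sequencing and reassembly, which this omits):

```python
from itertools import cycle

def inverse_mux(frames, links):
    """Stripe a sequence of frames round-robin across parallel links.

    `frames` is any iterable of payloads; `links` is a list of link
    identifiers. Returns a dict mapping each link to the frames
    assigned to it, so N links each carry roughly 1/N of the load.
    """
    assignment = {link: [] for link in links}
    for frame, link in zip(frames, cycle(links)):
        assignment[link].append(frame)
    return assignment

# Eight frames striped over four parallel links: each link gets every 4th frame.
out = inverse_mux(range(8), ["link-a", "link-b", "link-c", "link-d"])
print(out["link-a"])  # [0, 4]
```

The appeal for an exchange point is that aggregate capacity scales by adding parallel L1 links rather than waiting for a single faster interface, which is the property Vadim's virtual-routers-over-a-fabric proposal tries to combine with shared-media flexibility.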
