Interesting People mailing list archives

Re: worth reading -- QOS Author is Motorola, Chief Software Architect


From: David Farber <dave () farber net>
Date: Tue, 24 Jun 2008 13:26:11 -0700


________________________________________
From: Christian Huitema [huitema () windows microsoft com]
Sent: Tuesday, June 24, 2008 4:12 PM
To: David Farber
Cc: Tony Lauck
Subject: RE: [IP] Re:   worth reading --  QOS  Author is Motorola, Chief Software Architect

I think Tony Lauck hits the nail on the head.

From an application's point of view, a network can be in one of three states. The network resources can be such that the application is expected to work, and generally does work, without any particular provisioning. The network resources can be such that the application is not expected to work, no matter what level of management is applied. And in the third state, the application may or may not work, and might actually work if network managers tried hard to push other applications aside.
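
A rough sketch of this three-way classification (my illustration only; the thresholds, headroom factor, and example numbers are invented, not anything specified above):

    # Hypothetical sketch of the three states described above.
    # The headroom factor and all numbers are illustrative assumptions.
    def classify(app_demand_mbps, typical_capacity_mbps, headroom=2.0):
        """Map an application's demand against typical available capacity."""
        if typical_capacity_mbps >= headroom * app_demand_mbps:
            return "expected to work with plain best effort"
        if typical_capacity_mbps < app_demand_mbps:
            return "not expected to work, no matter how the network is managed"
        return "borderline: might work if other applications are pushed aside"

    print(classify(app_demand_mbps=4, typical_capacity_mbps=20))    # web video streaming
    print(classify(app_demand_mbps=15, typical_capacity_mbps=6))    # HD peer-to-peer call
    print(classify(app_demand_mbps=1.5, typical_capacity_mbps=2))   # "camcorder" quality call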

Streaming video from a web site is an example of an application that is expected to work, if you bought a "broadband" connection. On the other hand, full-duplex peer-to-peer high-definition video calls are not expected to work. Peer-to-peer "camcorder" quality video calls are "almost" expected to work, and thus fall in the third category, the category in which QOS might perhaps be useful, albeit only for a very limited time.

Tony rightfully points out the relation between expectations and Moore's law. The three categories will remain over time, but the example applications will change. Just ten years ago, we would not have expected video streaming to almost always work. Ten years from now, there will be enough bandwidth for HD video calls to work almost all the time, using the default "best effort" service. Ten years from now, there will still be applications that are not expected to work, maybe something like high-definition holographic transmission. But if we wait another few years, those applications too will become routine, and others will emerge as the new frontier.

The key requirement for sane policies is not to close off the future. Computers and networks are expected to improve continuously. Applications that require special provisioning today are expected to become routine tomorrow. Let's not assume that these applications have a magic relation with the mythical QoS monster.

-----Original Message-----
From: David Farber [mailto:dave () farber net]
Sent: Tuesday, June 24, 2008 12:18 PM
To: ip
Subject: [IP] Re: worth reading -- QOS Author is Motorola, Chief Software Architect


________________________________________
From: Tony Lauck [tlauck () madriver com]
Sent: Tuesday, June 24, 2008 2:19 PM
To: David Farber
Subject: Re: [IP] worth reading -- QOS Author is Motorola, Chief Software Architect

If we take out of the argument questions (or fears) of price discrimination or other monopolistic practices, it becomes a question of engineering. Is it better to provide adequate service by special handling of certain "demanding" types of packets, or is it better to provide adequate service by resource sizing with simple but fair scheduling policies? There is a trade-off between devoting resources to developing and implementing QoS mechanisms throughout the network and host computer stacks vs. devoting resources to provisioning additional capacity.
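
To make the two engineering options concrete, here is a minimal sketch (my own illustration, not anything from the thread) contrasting "special handling" via a strict-priority scheduler with the "simple but fair" alternative of a single FIFO queue on an adequately sized link:

    # Illustrative contrast only: neither class models a real router implementation.
    from collections import deque

    class StrictPriorityScheduler:
        """Special handling: 'demanding' packets always jump ahead of the rest."""
        def __init__(self):
            self.high, self.low = deque(), deque()

        def enqueue(self, packet, demanding=False):
            (self.high if demanding else self.low).append(packet)

        def dequeue(self):
            if self.high:
                return self.high.popleft()
            return self.low.popleft() if self.low else None

    class FifoScheduler:
        """Simple policy: one line for everyone; adequacy comes from capacity, not classification."""
        def __init__(self):
            self.queue = deque()

        def enqueue(self, packet, demanding=False):
            self.queue.append(packet)

        def dequeue(self):
            return self.queue.popleft() if self.queue else None

The first approach pays its cost in classification and configuration at every hop along the path; the second pays in capacity.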

There is approximately half a century's worth of history here. This is a debate that started with Bell Labs' circuits vs. MIT's packets. This is token ring vs. Ethernet, and later FDDI vs. Fast Ethernet. This is ATM vs. IP. The argument has been nearly continuous throughout the history of computer networking.

Historically, the approaches that succeeded in the marketplace have been the simple ones that could more easily ride the technology curve driven by Moore's law. There are always people trying to make a career or business out of new complexities, and occasionally some of them succeed. Perhaps now is the time for more complex systems, but I doubt it. I have seen simple approaches win out too many times. There has always been a reason why the complex approaches lost, and those reasons have been many: cost, performance, reliability, time to market, compatibility, ease of use, etc. Others, who fancy themselves masters of complexity, may have a different opinion.

Dave Crocker has made one specific claim regarding transient contention being the difference between "almost never" and "never". This is not a meaningful distinction in the real world, because real systems fail. One is always working with probabilities. QoS services with real-time guarantees require redundancy coupled with real-time failover, and this has historically been achieved only with high levels of redundancy. High degrees of "almost never" can be achieved at high cost. "Never" comes at infinite cost.
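
A back-of-the-envelope illustration of that "almost never" arithmetic (the component availability figure is invented for the example; the point is the shape of the curve, not the numbers):

    # Each independent redundant path is assumed to be available 99.9% of the time.
    component_availability = 0.999

    for n in range(1, 5):  # n-way redundancy
        p_all_down = (1 - component_availability) ** n
        print(f"{n}-way redundancy: service unavailable about {p_all_down:.1e} of the time")

    # Every extra "nine" costs another full layer of redundancy, and the
    # probability never reaches exactly zero; "never" has no finite price.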

[Along this line, my advocacy of KISS is not intended as an argument for government-mandated network neutrality. There is nothing less simple in today's world than Government.]

Tony Lauck
www.aglauck.com



David Farber wrote:
________________________________________
From: Dave Crocker [dhc2 () dcrocker net]
Sent: Tuesday, June 24, 2008 11:07 AM
To: David Farber
Cc: ip; Waclawsky John-A52165
Subject: Re: [IP] QOS  Author is Motorola, Chief Software Architect

David Farber wrote:
From: Waclawsky John-A52165 [jgw () motorola com]
Sent: Monday, June 23, 2008 1:08 AM
To: David Farber
Subject: RE: [IP] Re: Net Neutrality: A Radical Form of Non-Discrimination by Hal Singer

Hi Dave, Some QoS perspectives that I have learned: First, the main problem. QoS really isn't needed when you have big pipes.


This view has gained popularity in recent years and it seems to be based on two misunderstandings. The first is that end-to-end performance is dictated by the size of pipes, and the second is that pipes are always large or that we can guarantee that eventually they all will be large.

Packet switching is more about the switching than the pipes. The path from one random end-system to another has quite a few switching points. This thing called queuing comes into play when there is transient contention for resources. This includes contention for use of each pipe along the way, but also contention in switches and, by the way, contention in either of the end-systems. (I'm qualifying with "transient" because sustained contention means that the system is fundamentally overloaded; queuing can't help there.)
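
A toy illustration of transient contention at a single switching point (entirely my sketch; the burst pattern and link speed are made up):

    # One output link drains 1 packet per tick; two inputs briefly deliver 3 per tick.
    # Average load here is only 60% of capacity, yet a queue still builds
    # during the burst, which is exactly where delay and jitter come from.
    service_per_tick = 1
    arrivals = [3, 3, 0, 0, 0, 0, 0, 0, 0, 0]  # a short burst, then quiet

    queue_depth = 0
    for tick, arriving in enumerate(arrivals):
        queue_depth = max(0, queue_depth + arriving - service_per_tick)
        print(f"tick {tick}: queue depth = {queue_depth}")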

The premise behind the "big pipes" view is that we don't have transient contention. It's simply not true.

What is true is that there are common scenarios where transient contention is almost never a problem. But the difference between "almost never" and "never" counts for everything in a world seeking reliability, especially if you want to cover a full range of scenarios.

One set of scenarios left out by "big pipe" devotees is a vast portion of the world with limited resources. While this obviously includes many remote or developing environments, it also includes less-capable channels such as mobile devices.

It should also be noted that there is a tendency for the core of the Internet to have less contention than access networks at the edge. We can wave our hands and say that the edges will eventually catch up, but history suggests otherwise.

A persistent lesson over the history of packet switching is that there is a wide range of resource capabilities, and anything designed to rely on high-end capabilities disenfranchises participants and systems that are not so privileged.

"QOS" has indeed had a problematic history over the life of packet-
switching,
but this seems to be because it is difficult to design in a way that
is useful
-- and then deploy it throughout the infrastructure -- rather than
because it
isn't needed.

Basic Internet capabilities were designed to maximize use of the channels, but at the cost of inter-packet arrival variance. Any application needing to sustain a specific transmission rate with specific (and low) variance is at risk, without some underlying design to ensure the necessary performance.
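
One common way applications cope with that inter-packet arrival variance is a playout (jitter) buffer. A minimal sketch, with arrival times and a 60 ms buffer depth assumed purely for illustration:

    # A sender emits one packet every 20 ms; the receiver delays playout by 60 ms.
    # A transient queue upstream delays a burst of packets past their deadlines.
    send_interval_ms = 20
    playout_delay_ms = 60
    arrival_times_ms = [0, 21, 44, 150, 155, 160, 181]  # assumed, jittery arrivals

    for i, arrival in enumerate(arrival_times_ms):
        deadline = i * send_interval_ms + playout_delay_ms
        status = "on time" if arrival <= deadline else "late (audible or visible glitch)"
        print(f"packet {i}: arrived {arrival:3d} ms, playout deadline {deadline:3d} ms -> {status}")

A deeper buffer hides more variance but adds latency, which is precisely what interactive applications cannot afford.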

Anyone with experience to the contrary might want to review their sampling methodology against the full and realistic set of Internet scenarios.


d/
--

   Dave Crocker
   Brandenburg InternetWorking
   bbiw.net



-------------------------------------------
Archives: http://www.listbox.com/member/archive/247/=now
RSS Feed: http://www.listbox.com/member/archive/rss/247/
Powered by Listbox: http://www.listbox.com




