mailing list archives
From: Jimmy Hess <mysidia () gmail com>
Date: Fri, 14 Jun 2013 19:12:59 -0500
On 6/14/13, Scott Helms <khelms () zcorum com> wrote:
> Really? In a completely controlled network then yes, but not in a
> production system. There is far too much random noise and actual latency
> for that to be feasible.
I think you might be applying an oversimplified assumption to the
situation. Noise limits the capacity of a channel and increases
the number of gyrations required to encode a bit so that it can be
received without error.
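As a rough illustration of that point, here is a small sketch using the
standard Shannon-Hartley formula, C = B * log2(1 + S/N): as the noise
grows relative to the signal, the channel's capacity shrinks, but it
never drops to zero, so a slower, more redundant encoding still gets
bits through. The numbers below are made up for illustration only.

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley channel capacity in bits/second.

    C = B * log2(1 + S/N). A noisier channel (lower S/N) yields a
    lower capacity, meaning more redundancy per bit, not zero bits.
    """
    return bandwidth_hz * math.log2(1 + snr_linear)

# Hypothetical 1 kHz timing channel at several signal-to-noise ratios:
for snr in (10.0, 1.0, 0.1, 0.01):
    print(f"S/N = {snr:>5}: capacity = {shannon_capacity(1000.0, snr):.1f} bit/s")
```

Even at S/N = 0.01 (noise a hundred times stronger than the signal),
the capacity is positive; the channel just needs many more repetitions
per bit.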
The degree of 'random noise', 'actual latency variation', and
'natural packet ordering' can be estimated, in order to separate the
noise from the signal.
Even with noise, after a sufficient number of repetitions, you can
figure out that the average value the errors were centered around
increased by 5ms or 10ms when a sequence of packets with certain
sizes, certain checksum values, and certain ephemeral ports was
processed in a certain order.
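A minimal simulation of that averaging argument, with made-up numbers:
a 5ms shift buried under Gaussian noise whose standard deviation is
four times larger. No single measurement reveals the shift, but the
difference of sample means over many repetitions does, since the
standard error shrinks as 1/sqrt(N).

```python
import random
import statistics

random.seed(42)

BASE_MS = 50.0    # hypothetical baseline latency
SHIFT_MS = 5.0    # the small shift we are trying to detect
NOISE_SD = 20.0   # noise std dev, much larger than the shift
N = 10_000        # number of repetitions of each probe

# Latency samples without and with the shift, drowned in noise.
baseline = [random.gauss(BASE_MS, NOISE_SD) for _ in range(N)]
shifted = [random.gauss(BASE_MS + SHIFT_MS, NOISE_SD) for _ in range(N)]

# The difference of means recovers the shift despite the noise:
# its standard error is NOISE_SD * sqrt(2/N), about 0.28ms here.
est_shift = statistics.mean(shifted) - statistics.mean(baseline)
print(f"estimated shift: {est_shift:.2f} ms")
```

With N = 10,000 repetitions the estimate lands within a fraction of a
millisecond of the true 5ms shift, even though individual samples vary
by tens of milliseconds.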