
RE: [EFM] Necessity of DBA mechanisms ...




Roy

Yes, I do agree that the data loss and the latency variations of the
general purpose internet make it a totally unsuitable medium for
commercial circuit emulation services. It's not that great for end-user
to end-user VoIP either :-).

My thinking is more along the lines of circuit emulation in tightly
controlled environments, i.e. Ethernet / IP metro access networks owned and
operated by the carrier, where the carrier has total control over the
infrastructure.

Personally I think T1 and E1 PBXs and C5 switches will still be around
until the end of this decade as voice delivery mechanisms. The half-life
is probably about seven years. I'll send you some other thoughts
directly, rather than via the exploder.

Bob

-----Original Message-----
From: Roy Bynum [mailto:rabynum@xxxxxxxxxxxxxx]
Sent: 20 July 2001 15:26
To: bob.barrett@xxxxxxxxxxxxxxx
Cc: stds-802-3-efm@ieee.org
Subject: RE: [EFM] Necessity of DBA mechanisms ...


Bob,

I do not believe that it is operationally feasible to maintain an end to
end data service that has to be end to end engineered and constrained in
order to support circuit emulation.  If I have read your memo correctly,
I don't think that you believe that it is feasible either.

That leaves the very large buffer scenario.  This concept has been around
for a long time.  I have seen several vendors attempt it with varying
success.  Invariably it is a very expensive solution for any technology.
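
To put rough numbers on the buffer scenario, here is a quick
back-of-the-envelope sketch in Python, using the figures from Bob's note
below (a 193-bit T1 frame every 125us, and worst-case latency of 100ms);
this is only the arithmetic, not any vendor's implementation:

  # Rough jitter-buffer sizing for emulating a T1 over a packet network.
  # A sketch only; figures are from Bob's note below.

  FRAME_BITS   = 193        # T1 frame (an E1 frame would be 256 bits)
  FRAME_PERIOD = 125e-6     # seconds per frame
  LINE_RATE    = FRAME_BITS / FRAME_PERIOD   # 1,544,000 bit/s

  WORST_CASE_LATENCY = 100e-3   # seconds, from Bob's note

  # The playout buffer must absorb the worst-case latency, so the whole
  # circuit ends up running at the worst-case delay to keep frame order.
  buffer_bits   = LINE_RATE * WORST_CASE_LATENCY      # 154,400 bits
  buffer_frames = WORST_CASE_LATENCY / FRAME_PERIOD   # 800 frames

  print(f"{buffer_bits:,.0f} bits buffered (~19 KB), "
        f"{buffer_frames:.0f} frames of added delay")

Nineteen kilobytes of memory is cheap; the 800 frames of added delay, a
hundred times the four to eight frames Bob reports for a controlled layer
two network, is the real cost.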

The problem with the Internet is not only latency variance, but also data
loss.  At present, the best that I have seen is a restricted-service
network.  That network has a packet loss rate of about 10^-3 (0.001).
That translates to a bit error rate of roughly 10^-4 to 10^-5.  That
would be considered a very unreliable circuit, not able to support a
non-IP data service SLA.  At present the Internet in general has a data
loss of about 1% to 5%.  A company that is willing to put its data over a
service provider circuit that is so unreliable would be looking for a
very low cost of service.  The low price of the service would make the
margins for such a service too low to support the additional equipment
and operations needed to deliver it.
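
As a back-of-the-envelope check on what that loss rate means for an
emulated circuit (the eight-frames-per-packet figure below is my
assumption for illustration, not a measured value):

  # How often packet loss hits an emulated T1.  A sketch; the
  # packetization (8 T1 frames per packet) is an assumption.

  LINE_RATE    = 1_544_000                  # T1 rate, bit/s
  PAYLOAD_BITS = 8 * 193                    # 8 T1 frames/packet = 1,544 bits
  PKTS_PER_SEC = LINE_RATE / PAYLOAD_BITS   # 1,000 packets/s

  PACKET_LOSS_RATE = 1e-3                   # the restricted-service network

  loss_events_per_sec = PKTS_PER_SEC * PACKET_LOSS_RATE     # ~1 per second
  bits_lost_per_sec   = loss_events_per_sec * PAYLOAD_BITS  # ~1,544 bit/s

  print(f"{loss_events_per_sec:.0f} lost packet/s, "
        f"{bits_lost_per_sec:,.0f} circuit bits lost per second")

Roughly one loss event every second, and each one is an error burst that
a TDM customer would see counted against the SLA.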

I don't believe that the Internet will ever be able to properly support
circuit emulation cost effectively, regardless of the technology.  As it
is, the cost of best effort Internet services is being subsidized by the
current low performance applications that ride on it.  Attempts to
improve the latency variance of the Internet are aimed at carrying
applications that require better data communications performance, so that
higher margin applications can be supported.

Picoseconds per bit, on a bit for bit alignment, are the bounds placed on
the variance of a multiplexed TDM circuit that you are trying to emulate.
The reliability of a 10^-8 (0.00000001) to 10^-10 (0.0000000001) bit
error rate will be even harder to emulate.  These services will need to
continue to maintain the high stability and reliability that gives the
service providers the ability to provide the SLAs that make this a high
margin service.
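
For comparison, here is what those native bit error rates mean in time
between errors on a single T1 (simple arithmetic, assuming independent
errors):

  # Mean time between bit errors on a native T1 at the BERs quoted above.
  # Simple arithmetic; assumes bit errors are independent.

  LINE_RATE = 1_544_000   # T1, bit/s

  for ber in (1e-8, 1e-10):
      errors_per_second = LINE_RATE * ber
      minutes_between   = 1 / errors_per_second / 60
      print(f"BER {ber:.0e}: one bit error every "
            f"{minutes_between:,.1f} minutes")

  # Prints roughly one error a minute at 10^-8 and one every ~108 minutes
  # at 10^-10; a very different world from a loss event every second.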

With the cost of high density TDM switch matrices falling, the cost of
true point to point dedicated circuit services is going to come down over
the next few years.  This will be even more true as more buildings and
homes are provided fiber access as ILEC bypass.  A major portion, if not
most, of the current cost to the customer of a TDM service is in the ILEC
facilities charges, not the cost of the service itself.

With EFM, even the cost of the service termination and customer premises
equipment will drop.  This cost reduction will, I hope, apply both to
point to point dedicated bandwidth services and to the lower margin
shared bandwidth services that provide application services.  If the EFM
group does not mess with the PHY too much, then the inherent reliability
that Ethernet currently has will help provide an infrastructure to
support those higher end applications.  Service providers should also be
able to provide a dedicated point to point bandwidth service, similar to
or plugged into an enhanced private line type of service, with only the
Ethernet end links adding latency variance.

Thank you,
Roy Bynum



At 12:10 AM 7/20/01 +0100, Bob Barrett wrote:
>Hi Roy,
>
>Yes, I understand latency variance versus latency. Still not sure where
>picoseconds come in. That's 10^-12 seconds, isn't it? Latency variations
>are something we are having to deal with when implementing circuit
>emulation even at layer two, and layer three to a degree. The only way we
>have found of doing this reliably is to design out the uncertainties in
>the infrastructure to minimise the latency variation. The only
>alternative is big buffers. With 193 or 256 bits per 125us, and latencies
>of up to 100ms, that can be a really large buffer. The ordinary latency
>has to increase to the worst (longest latency) case in order to maintain
>frame order. We have found buffering four to eight frames works if layer
>two switching is used rather than IP, but that's only good in a totally
>controlled end-to-end environment between the subscriber and the POP/CO
>(or the point where the circuit is recovered).
>
>Hopefully MPLS will help in the IP environment, once the end to end IP
>network can support this. I think that is 12-18 months away. What is your
>view on this time scale?
>
>Bob
>
>-----Original Message-----
>From: Roy Bynum [mailto:rabynum@xxxxxxxxxxxxxx]
>Sent: 18 July 2001 16:19
>To: bob.barrett@xxxxxxxxxxxxxxx
>Subject: RE: [EFM] Necessity of DBA mechanisms ...
>
>
>Bob,
>
>Please re-read my e-mail. I said "latency variance", not "latency". There
>is a very big difference between these two things. "Latency" is the
>overall delay that data frames/packets/cells are subject to when they
>traverse any system or infrastructure. "Latency variance" is the
>difference in the actual individual per-packet latency that each packet
>has received going through the system or infrastructure, when compared to
>other packets going through the system or infrastructure. The bit-level
>equivalent of "latency variance", or "data jitter", is "bit jitter",
>where the leading and trailing edges of "bits" blur when seen on a scope.
>Latency variance has to be compensated for by applications and upper
>layer protocols, and becomes a perceived latency that greatly affects the
>performance of applications. More often than not the problem with poor
>perceived performance of VoIP applications is actually the latency
>variance caused by the IP infrastructure, rather than the bandwidth, lack
>of QoS, or direct end to end latency.
>
>Thank you,
>Roy Bynum
>
>
>At 03:53 PM 7/18/01 +0100, Bob Barrett wrote:
> >Hi Roy,
> >
> >Pico seconds latency, are you sure about that? Pico seconds phase
> >stability, yes, at stratum one. Four to eight frame times end to end
> >for a voice call is more like it, and that's just the framer and switch
> >latency, i.e. 2ms (8x125us).
> >
> >Even on a point to point T1/E1 between LIUs it's only running at
> >2 Mbit/s or 1.544. It would be a short link to take an edge only a
> >pico second to make the journey, wouldn't it?
> >
> >Bob
> >
> >Not via the exploder.