
Re: Short haul PMDs

I am also interested in a short-haul solution, primarily within data
centers and enterprises. My main area of interest is Ethernet-based
SANs.

Thanks

Rich Taborek wrote:
> 
> Ladies and Gentlemen,
> 
> My best recollection of the straw polls taken by Jonathan, our chair,
> is that the user community has been very poorly represented at
> meetings. Recently, we have heard from two members of the Ethernet
> user community, representing very high-volume Ethernet equipment
> users, that short-haul PMDs are key.
> 
> I know from other related standards bodies and industry associations
> representing SANs and clustered networks, sometimes referred to as
> System Area Networks, that the percentage of short-haul links (<100 m)
> is significantly higher than that encountered in LAN environments.
> Throwing the typically short-haul MAN/WAN access links into the fray,
> I'm having a hard time swallowing both the current HSSG distance/cable
> plant objectives and the PMD solution set. This is especially true in
> light of the last, but not least, PAR criterion of Economic
> Feasibility.
> 
> Ethernet users are demanding low-cost short-haul solutions. A small
> number of well-defined, simple candidate solutions have been proposed.
> I suggest going with those solutions. If it takes adding a short-haul
> objective (<100 m) to get off the dime and allow the Task Force to
> further develop PMD solutions for the multimode fiber objectives,
> let's do it! I don't believe that the user community would settle for
> proprietary short-haul solutions for well over half of their
> connections.
> 
> It would help a great deal to hear from other users.
> 
> Best Regards,
> Rich
> 
> --
> 
> > "McCormick, Corey" wrote:
> >
> > Our experience is still very similar.  Last weekend we just cut our
> > last refinery from FDDI/Ethernet to GigE switches, and again more
> > than half of the GigE ports are <25 m, with most of those either
> > 2-4 m or 10-15 m.  These short runs all used new cables with the
> > correct ends so there would be no couplers or splices.  They are all
> > SC or MT-RJ (spec'd to get port density in the switches).  The
> > longer installed links are 300 m - 1 km and all use 9-year-old
> > FDDI-spec fiber on ST connectors.  Only ~10% are LX; the rest are
> > SX.  We would have used copper for the shorter links (cables that
> > stay inside one building and are <90 m) had the cables, NICs, switch
> > ports and GBICs all been available at the time of the
> > order/installation.  I do not know about others, but our short runs
> > far outnumber long ones.
> >
> > Also, it might be noted that petrochemical refineries are among the
> > larger manufacturing complexes in the world.  They are large, very
> > two-dimensional installations and are usually measured in
> > kilometers, not feet.  While we have some SM applications, 95+% are
> > MM, and the majority of the ports will be hosts.  I can only
> > speculate that this will continue as we move to 10G.  Our physical
> > architecture seems to stay about the same regardless of the
> > technology.  Ethernet was replaced by FDDI, which was replaced by
> > ATM or GigE, which I believe will be replaced by 10GE in the same
> > pattern.  As this was our last location to convert, we have now done
> > them all in a similar fashion, and they were designed and
> > implemented by three different teams (admittedly with some
> > cross-pollinated influence).
> >
> > Gates, Kroc, and Walton all believe(d) that volume wins, and my
> > experience leaves me in no position to argue.  Which is better, SCSI
> > or IDE/ATAPI?  I believe SCSI to be the more scalable, manageable
> > and extensible design, but ATAPI wins on volume, and thus cost, by a
> > huge margin.  I do not always like the KISS principle, but it is
> > more often than not the correct one.  So long as there can be
> > modular ports (a.k.a. GBICs), the actual port technology matters
> > little to the switch/NIC vendors, but when there can be a low-cost
> > solution integrated everywhere, even with its limitations, I think
> > that will be the most successful.
> >
> > So, to improve the chances for a successful 10G implementation, it
> > seems to me that the short-haul solution needs to be Good, Fast and
> > Cheap.  The longer runs vary in requirements and are not as cost
> > sensitive, since we need far fewer of them.
> >
> > Just more experience,
> >
> > Corey McCormick
> > CITGO Petroleum
> >
> >  -----Original Message-----
> > From:   Roy Bynum [mailto:rabynum@mindspring.com]
> > Sent:   Tuesday, August 01, 2000 2:00 PM
> > To:     Chris Simoneaux; stds-802-3-hssg@ieee.org
> > Subject:        RE: Equalization and benefits of Parallel Optics.
> >
> > Chris,
> >
> > After a lot of thought from a customer implementation viewpoint,
> > that is the conclusion I have come to.
> >
> > Thank you,
> > Roy Bynum
> >
> > At 04:29 PM 7/31/00 -0600, Chris Simoneaux wrote:
> >
> > >Roy,
> > >Nice piece of info.  It is worthwhile to finally get an
> > >installer/end user perspective on the environment that 10GbE will
> > >exist in.  If one believes your analysis (and I haven't seen any
> > >contradictions), then it would seem quite reasonable to expect a
> > >PMD objective which covers the 2-20 m space..... i.e., 66% of the
> > >initial market.
> > >
> > >Would you agree?
> > >
> > >Regards,
> > >Chris
> 
> -------------------------------------------------------
> Richard Taborek Sr.                 Phone: 408-845-6102
> Chief Technology Officer             Cell: 408-832-3957
> nSerial Corporation                   Fax: 408-845-6114
> 2500-5 Augustine Dr.        mailto:rtaborek@nSerial.com
> Santa Clara, CA 95054            http://www.nSerial.com

-- 

Satish Mali
   
   http://www.TrainingCity.com     -> Train to success