
RE: Equalization and benefits of Parallel Optics.



Our experience is still very similar.  Last weekend we just cut our last refinery from FDDI/Ethernet to GigE switches, and again more than half of the GigE ports are <25m, with most of those either 2-4m or 10-15m.  These short runs all used new cables with the correct ends so there would be no couplers or splices.  They are all SC or MT-RJ (spec'd to get port density in the switches).  The longer installed links are 300m - 1km and all are using 9-year-old FDDI-spec fiber on ST connectors.  Only ~10% are LX; the rest are SX.  We would have used copper for the shorter links (cables that stay inside one building and are <90m) had the cables, NICs, switch ports and GBICs all been available at the time of the order/installation.  I do not know about others, but our short runs far outnumber long ones.

Also it might be noted that petrochemical refineries are among the larger manufacturing complexes in the world.  They are large, very two-dimensional installations and are usually measured in km, not feet.  While we have some SM applications, 95+% are MM and the majority of the ports will be hosts.  I can only speculate that this pattern will continue as we move to 10G.  Our physical architecture seems to stay about the same regardless of the technology.  Ethernet was replaced by FDDI, which was replaced by ATM or GigE, which I believe will be replaced by 10GE in the same pattern.  As this was our last location to convert, we have now done them all in a similar fashion, and they were designed and implemented by three different teams (admittedly with some cross-pollinated influence).

Gates, Kroc, and Walton all believe(d) that volume wins, and my experience leaves me in no position to argue.  Which is better, SCSI or IDE/ATAPI?  I believe SCSI to be the more scalable, manageable and extensible design, but ATAPI wins volume and thus cost by a huge margin.  I do not always like the KISS principle, but it is more often than not the correct one.  So long as there can be modular ports (a.k.a. GBICs), the actual port technology matters little to the switch/NIC vendors, but when there can be a low-cost solution integrated everywhere, even with its limitations, I think that will be the most successful.

So, to improve the chances for a successful 10G implementation, it seems to me that the short solution needs to be Good, Fast and Cheap.  The longer runs are varied in requirements and not as cost sensitive since we need many fewer of them.

Just more experience,

Corey McCormick
CITGO Petroleum

 -----Original Message-----
From:   Roy Bynum [mailto:rabynum@xxxxxxxxxxxxxx]
Sent:   Tuesday, August 01, 2000 2:00 PM
To:     Chris Simoneaux; stds-802-3-hssg@xxxxxxxx
Subject:        RE: Equalization and benefits of Parallel Optics.


Chris,

After a lot of thought from a customer implementation viewpoint, that is
the conclusion that I have come to.

Thank you,
Roy Bynum

At 04:29 PM 7/31/00 -0600, Chris Simoneaux wrote:

>Roy,
>Nice piece of info.  It is worthwhile to finally get an installer/end user
>perspective of the environment that 10GbE will exist in.  If one believes
>your analysis (and I haven't seen any contradictions), then it would seem
>quite reasonable to expect a PMD objective which covers the 2-20m
>space, i.e. 66% of the initial market.
>
>Would you agree?
>
>Regards,
>Chris
>
>-----Original Message-----
>From: Roy Bynum [mailto:rabynum@xxxxxxxxxxxxxx]
>Sent: Monday, July 31, 2000 10:01 AM
>To: Chris Diminico; stds-802-3-hssg@xxxxxxxx
>Subject: Re: Equalization and benefits of Parallel Optics.
>
>
>
>Chris,
>
>You had sent me a request for information similar to this.  I have been
>very busy with other things so could not respond properly until now.  Hopefully this
>will also help and add weight to our, the customers', concerns.
>
>I had a meeting with a major technology consumer last week.  I will not
>state any names, but you can guess who.  They were very interested in 10GbE
>early on as part of their data facility overbuild plans.  It is in this
>context that I want to make these comments.
>
>As part of my role in the design and implementation of advanced
>architecture data networks, I have been involved with the design and
>implementation of data facilities for about ten years.  This e-mail is a
>simple overview of how these are designed and implemented.
>
>Large data facilities take a long time to plan and build.  As such the
>initial design and implementation is based on existing mature
>technology.  The initial construction within the data facility is grouped
>in a common area, generally at one end of the room.  If the routers are located in
>the same room as the servers, they will generally be along a wall in the
>data room.  The servers and data storage systems are put in the room next
>to the area where the routers were installed.  Data switches which
>aggregate the traffic and manage the server traffic flow are sometimes
>located with the routers, and sometimes located with the servers.  Where
>there is planned router growth, the switches are installed adjacent to the
>servers.  From time to time, the routers are located in a different room,
>with longer reach connections between the aggregation data switches and the
>routers.
>
>In most cases, the rows of equipment are at most about 20 equipment
>racks/cabinets long.  For 24in racks that is about 40 feet (12.2m), for
>19in racks that is about 32 feet (9.75m).  Most of the time data switches
>will be in the same row as the servers, to reduce the amount of cable trays
>and cable handling between rows.  Often the aggregation data switches will
>be in the middle of the row to reduce the distance of the interconnect
>cabling.  The row to row distance is about 8 feet (2.5m).  Even with the
>riser from the bottom of one rack/cabinet at one end of the row to the
>bottom of the rack/cabinet at the other end of adjacent rows, the
>interconnections are less than 20m.
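>
>As a rough check on those numbers (a sketch only; the 2 m of riser/drop
>at each rack is an assumption on my part, the other figures are the
>estimates above):
>
>    # illustrative worst-case interconnect length within/between rows
>    RACKS_PER_ROW = 20
>    ROW_PITCH_M   = 2.5          # row-to-row distance (~8 ft)
>    RISER_M       = 2.0          # assumed drop/rise at each rack
>
>    def row_length_m(rack_width_in):
>        return RACKS_PER_ROW * rack_width_in * 0.0254
>
>    def worst_case_run_m(rack_width_in, rows_apart=1):
>        # end of one row to the far end of an adjacent row, via the
>        # cable tray and a riser at each end
>        return row_length_m(rack_width_in) + rows_apart * ROW_PITCH_M + 2 * RISER_M
>
>    print(round(worst_case_run_m(24), 1))   # ~18.7 m for 24in racks
>    print(round(worst_case_run_m(19), 1))   # ~16.2 m for 19in racks
>
>Either way the end-to-end runs stay under 20m.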
>
>For new technology overbuilds of existing data rooms, the new technology
>systems are grouped together in a different area of the data room than the
>mature technology.  The data switches to support the new technology systems
>are co-located in the same row with those systems.  In these situations, the
>vast majority of the new technology interconnections are within the row of
>the new technology overbuild, less than 20m.  By some estimates, data rooms
>designed specifically around 10GbE will be at least two years away.  Given
>that the initial deployment of 10GbE will be in new technology overbuilds
>of these data rooms, it is very important to be able to understand and
>use the same construction techniques and technologies, such as the same
>type of fiber and fiber management systems.
>
>It is a personal estimation on my part that the high capacity data switches
>will be at about 500+ Gb aggregate bandwidth per bay/cabinet by about
>2002.  As such, they will handle a total of 50 10GbE links.  With a limit
>of 19 racks for servers, even at a single non-redundant 10Gb link each, that
>is 19 links.  For servers with redundant links, that is 38 ports, or about
>380Gb aggregate bandwidth, which would exceed the ability of the data switch
>to interconnect with any outside communications systems.  In the case of
>exceeding the aggregate bandwidth of any one switch, multiple switches are
>interconnected.  These switches could be located next to each other or, as
>is more likely, at equal distances along the row of servers.  As more and
>more servers come on line, the number of supporting data switches
>increases along with the interconnections between the data switches.  In
>this situation, the interconnections will break down roughly as follows:
>about 1/3 (33%) of the data switch ports will be connected to the supported
>servers/storage systems; 1/3 (33%) of the data switch ports will be
>interconnections between the aggregation data switches; and 1/3 (33%) of
>the ports on the aggregation data switches will go to outside
>communications systems.  From this simple model it is easy to see that
>potentially 66% of the initial 10GbE links will be less than 20m.
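>
>To put the same model in numbers (again just a sketch; the 500 Gb per bay
>figure and the even 1/3 split are the estimates above):
>
>    # illustrative port-budget model for one aggregation data switch
>    SWITCH_CAPACITY_GB = 500    # estimated aggregate bandwidth per bay
>    LINK_RATE_GB = 10
>    total_ports = SWITCH_CAPACITY_GB // LINK_RATE_GB   # 50 x 10GbE ports
>
>    server_share  = 1.0 / 3   # ports to servers/storage, in-row, < 20 m
>    switch_share  = 1.0 / 3   # switch-to-switch ports, in-row, < 20 m
>    outside_share = 1.0 / 3   # ports to outside communications systems
>
>    short_links = (server_share + switch_share) * total_ports
>    print(total_ports)                   # 50
>    print(short_links)                   # ~33 links shorter than 20 m
>    print(server_share + switch_share)   # 0.666..., the ~66% figure above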
>
>Thank you,
>Roy Bynum
>
>At 05:45 PM 7/28/00 -0400, Chris Diminico wrote:
> >
> >Corey,
> >
> >A personal thanks for the invaluable customer input. I believe that if we
> >had more customers
> >coming forward with their detailed requirements it would help break the
> >current stalemate in the PMD
> >selections. This is the type of debate that I hoped to stimulate in
> >proposing that we should
> >re-address the objectives of the PMDs; we need to clearly resolve any
> >ambiguity in the
> >objective statements in regards to application-space media-distances and
> >the usage of the word
> >"installed" to represent MMF fiber performance.
> >
> >As a supplier of Internet infrastructure product for Ethernet customer
> >applications,
> >I hear requests such as yours each day. I'll paraphrase in bullets here,
> >borrowing from your e-mail.
> >
> >----My reason for wanting MMF (a 10G interface over MMF) is primarily
> >     cost, simplicity, and compatibility with my current applications
> >     (technology and distances).
> >----Cost - overall cost for the total installation.
> >+++Labor: LAN installers familiar with multimode terminations produce
> >higher yields per unit time versus single mode.
> >+++Materials: Connectors, tools, patch cables, test equipment, Laser/LED
> >transceivers, etc...
> >
> >Other customers of 10 Gb/s Ethernet addressing the reflector and the task
> >group have
> >voiced strong support for the inclusion of a low-cost short-reach
> >multimode fiber objective
> >even if it included the use of higher bandwidth MMF. The task group
> >responded to these
> >clearly stated customer requirements by including in the current set of
> >objectives a physical
> >layer specification for operation over 300 m of MMF. Omission of the word
> >"installed" was to
> >implicitly allow for the new higher bandwidth MMF fiber. The usage of the
> >word "installed" in the
> >100 meter objective was to identify the MMF with the MMF currently
> >specified in 802.3z.
> >
> >In order to clearly identify the current implicit differences in the MMF
> >objective fiber types,
> >
> >I offer the following definitions.
> >
> >+++++Installed MMF: MMF as specified in 802.3z.
> >+++++MMF: Either installed MMF or the Next Generation MMF fiber
> >specifications currently proposed in both TIA and ISO.  The development
> >of these specifications was supported in a Liaison letter issued from
> >IEEE.
> >
> >A low-cost serial 850 nm PMD option coupled with the benefits of the
> >higher bandwidth
> >300 meter multimode fiber solution will address your requirements for
> >cost, simplicity, and compatibility
> >with your current Ethernet (10 Mb/s-100 Mb/s-1 Gb/s) distances and for the
> >10 Gb/s Ethernet
> >distances. Additionally, the new MMF coupled with the right PMD would
> >allow for next generation
> >40 Gb/s Ethernet applications.
> >
> >The impact of media selection on technology deployment can be severe.
> >The debate over driving single mode versus higher performance multimode for
> >new "in the building" LAN installations has the same flavor as coax versus
> >twisted-pair.
> >Before coming to CDT, I had worked at Digital Equipment Corporation for
> >almost 20 years.
> >DEC lost the Ethernet repeater business (coax) primarily due to its
> >slowness in responding
> >to the customer requirements for Ethernet over twisted-pair. DEC said,
> >"coax is technology
> >proof and will meet all of your long term application needs", the customer
> >said, "but my
> >reason for wanting twisted-pair is overall cost (installation, testing,
> >materials), simplicity, and
> >compatibility with my current applications (technology and distances)."  The
> >rest is history.
> >
> >
> >Chris Di Minico
> >Cable Design Technologies (CDT) Corporation
> >Director of Network Systems Technology
> >Phone: 800-422-9961 ext:333
> >e-mail: cd@xxxxxxxxxxxxxx
> >
> >
> >>----- Original Message -----
> >>From: McCormick, Corey <Corey@xxxxxxxxx>
> >>To: stds-802-3-hssg@xxxxxxxx
> >>Sent: Thursday, July 27, 2000 12:31 AM
> >>Subject: RE: Equalization and benefits of Parallel Optics.
> >>
> >>I also may be a bit confused.  From a business perspective I have this view.
> >>
> >>My reason for wanting a 10G interface over MMF is primarily cost and
> >>simplicity.  Most of the servers I have installed are within 100m and
> >>most of the core and distribution switches are as well.  If there is a
> >>low-cost way to use some new-fangled media, then fine, but it seems to me
> >>that improving ASIC technologies and economies of scale are the primary
> >>factors driving down the cost of interface technologies.
> >>
> >>If the MMF limit is 100m or less, then the pain incurred for me installing
> >>new MMF is relatively minor, as the distance is not that large.  This
> >>means the number of labor-intensive obstacles encountered will be
> >>small.  It is work and cost to be sure, but if the runs were for
> >>200-500m+ then the labor costs would be *much* higher.  However, I
> >>believe the costs for the tooling, cables, certification gear and
> >>connectors will increase if we choose some new radically different
> >>technology as the only choice.  In our experience the SFF connectors are
> >>not significantly less in overall cost.  (there are exceptions, but for
> >>the majority of them the costs are quite similar to ST/SC)  We still have
> >>*much* more difficulty with the SFF installations due to primarily lack
> >>of available cables, field termination components, and conversion
> >>cables.  Also, there is the major problem of field Zip<->Dual fiber MM
> >>adaptation to our installed ST/SC infrastructure (yuk!).
> >>
> >>I really do not care which technology is selected/specified, but for the
> >>short-haul standard my primary goal is lowest overall cost for the total
> >>installation.  (Labor, connectors, tools, patch cables, test equipment,
> >>Laser/LED transceivers, etc...)  I care very little about which form
> >>factor, mostly the cost and ease of use.
> >>
> >>If such relatively simple things as the broken 10/100 Autoneg PHY and the
> >>LX mode adaptation/conditioning cables are such a problem for the wide
> >>acceptance of new technologies, then it seems like the KISS principle
> >>should be a strong factor.  I do not care how complicated it is
> >>internally, but it needs to be simple for the end user.
> >>
> >>I also seem to remember that the goal was 3X the cost of 1G.  If the
> >>cable length limits are going to be <100m, then the
> >>real-world end-user comparison will be with 1000BASE-T copper,
> >>not SX.  This might make it much more difficult to meet the 3X cost
> >>target unless there are *significant* savings in the
> >>Phy/Xceiver/cable/connector/tools area.
> >>
> >>My engineering hat does not always agree with this, but then it is
> >>business that pays the bills.
> >>
> >>What do you good folks think?
> >>
> >>Corey McCormick
> >>CITGO Petroleum
> >>
> >>  -----Original Message-----
> >>From:   Booth, Bradley
> >>[mailto:bradley.booth@xxxxxxxxx]
> >>Sent:   Wednesday, July 26, 2000 8:30 PM
> >>To:     stds-802-3-hssg@xxxxxxxx
> >>Subject:        RE: Equalization and benefits of Parallel Optics.
> >>
> >>I have one question:
> >>
> >>Which of our distance objectives is satisfied with parallel fiber and
> >>parallel optics?
> >>
> >>It has been my interpretation that when we talked about 100m of installed
> >>base of MMF, we were referring to the MMF fiber currently available for
> >>use by 802.3z.  Parallel optics does not operate over this installed base.
> >>
> >>Or am I missing the point here?
> >>
> >>Cheers,
> >>Brad
> >>
> >>         -----Original Message-----
> >>         From:   ghiasi [SMTP:Ali.Ghiasi@xxxxxxxxxxx]
> >>         Sent:   Tuesday, July 25, 2000 8:32 PM
> >>         To:     stds-802-3-hssg@xxxxxxxx; Daljeet_Mundae@xxxxxxxxx;
> >>hakimi@xxxxxxxxxx
> >>         Cc:     Ali.Ghiasi@xxxxxxxxxxx
> >>         Subject:        RE: Equalization and benefits of Parallel Optics.
> >>
> >>         Sharam
> >>
> >>         > From: "Hakimi, Sharam (Sharam)" <hakimi@xxxxxxxxxx>
> >>         > To: stds-802-3-hssg@xxxxxxxx, "'Daljeet_Mundae@xxxxxxxxx'"
> >>         <Daljeet_Mundae@xxxxxxxxx>
> >>         > Subject: RE: Equalization and benefits of Parallel Optics.
> >>         > Date: Tue, 25 Jul 2000 21:04:49 -0400
> >>         > MIME-Version: 1.0
> >>         > X-Resent-To: Multiple Recipients <stds-802-3-hssg@xxxxxxxxxxxxxxxxxx>
> >>         > X-Listname: stds-802-3-hssg
> >>         > X-Info: [Un]Subscribe requests to  majordomo@xxxxxxxxxxxxxxxxxx
> >>         > X-Moderator-Address: stds-802-3-hssg-approval@xxxxxxxxxxxxxxxxxx
> >>         >
> >>         >
> >>         > Although parallel fiber is technically an easier solution,
> >>         > the major reason for support of 850nm has been to consider
> >>         > the installed base, and cost. If users have to pull new
> >>         > fiber, IMHO, parallel fiber would not be on top of the list,
> >>         > and most of the installed base is single fiber.
> >>
> >>         I did not suggest pulling any new fiber.  Limit the shortwave
> >>         variant, including parallel optics, to the data center with a
> >>         100 m radius.
> >>
> >>         Thanks,
> >>
> >>         Ali Ghiasi
> >>         Sun Microsystems
> >>
> >>         >
> >>         > Sharam Hakimi
> >>         > Lucent Technologies
> >>         >
> >>