-< I apologize for the transgression of IEEE etiquette in including real Web-published public prices in the previous version of the posting. I hope I have not offended anyone too terribly as I was not aware of the restriction. >-
Sure thing. The 0.5M spec is for one of our vendors' (proprietary Cisco GigaStack) GBICs, which uses something akin to a FireWire cable for single-GigE or dual-GigE switch-to-switch connections in a single GBIC slot. It is not 1000BASE-CX, but at least it is inexpensive (1/2 the cost of an SX GBIC) and available for closet/rack interconnects. We do not use them today, though, due to the distance limits and the lack of NIC support for the interface.
1000BASE-CX supports longer links but costs almost the same as 1000BASE-SX GBICs (lack of volume drives the cost up, I suspect). However, 1000BASE-CX is extremely rare in our experience. The majority of our NIC vendors (3Com, IBM, Compaq, Alteon) provide almost exclusively SX NICs without the benefit of a GBIC slot, and this has further limited our practical choices. The cost of a 1000BASE-SX NIC with a fixed (non-GBIC) interface is very near our cost of the SX GBIC as a stand-alone part. Today the NICs are ~1X in cost and the separate SX GBICs are ~0.5-0.9X. (We of course have a few proprietary NICs @ 2X-8X each above the norm, but thankfully they are the exceptions.) Even obtaining 1000BASE-CX GBICs has been tough, much less getting them supported by another vendor's GBIC interface. GBICs can be great, but they do not yet share the compatibility level of 10Base AUI, or even 100Base MII, in our day-to-day lives. Just because the connector fits doesn't mean the link works well...
I suspect that the promise of 1000Base-TX pretty much killed the 1000Base-CX market and its development, but with no TX standard likely for 10G (I will trust all you in-the-trenches-EE-types for that insight), I believe the CX option should be much more popular this time. To a large extent I think this will depend on cost (again), as we obviously need both ends of the link to support the same interface media, and the two ends are under different market pressures, I believe. Cost is always an issue, but packaging on the NIC side is much less of a problem than on the switch side.
As market pressure/competition has brought prices down and density up for GigE switches, we are seeing similar things in that market as well. The packaging and cost issues seem to be pushing our vendors towards small-footprint connectors, which preclude the use of the much larger but more convenient (and expensive) GBIC/SC connector housing. In the standalone/pizza-box (1U-5U in height) GigE switches, GBICs are still common, but in slot-based switching chassis the GBIC interface looks to be fading. There are exceptions for dedicated uplink ports, where the GBIC's flexibility seems to be of prime importance.
If the cost advantage of any copper-spec 10GigE over the equivalent fiber solution is very large, I believe it will be very popular, provided the distance is great enough to cover much of the installed data center topologies. Our main data center is ~75M across and we use two central switching locations, so for us 25M will cover many of our connections (~50%, I would think).
We have architected our data center clusters around other fairly short maximum lengths, such as High-Voltage Differential SCSI, Low-Voltage Differential SCSI, IBM's SSA serial disk architecture, etc., so this would be nothing unfamiliar.
Hope this helps,