
Clarification: 32-bit interface is 32 bits of Rx




Simon, I now see I misunderstood your earlier message.  When I said 32-bit
interface, I meant 32 Rd pins, not 16 Rd and 16 Td pins.
Sorry,
Shawn

-----Original Message-----
From: Rogers, Shawn 
Sent: Tuesday, June 08, 1999 3:53 PM
To: 'Simon L. Sabato'
Cc: 'stds-802-3-hssg@ieee.org'
Subject: 32-bit interface is not Slow!



Simon, I agree with your point that a 32-bit interface is not slow.  A
32-bit interface would require buffer frequencies of 156.25MHz, which is
within today's ASIC capability.  The one exception is the clocks (Tx and
Rx), which would be required to run at 312.5MHz if Tsu/Th were referenced
to the rising edge only.  The alternative is to reference Tsu/Th of data
to both the rising and falling edges (aka double clocking) to keep the
clock frequency within CMOS limits.
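
To make the arithmetic behind those numbers explicit, here is a minimal
Python sketch; the only input is the 10 Gb/s aggregate rate, and the
function name is mine, not anything from a draft:

    LINE_RATE_BPS = 10e9                # 10 Gb/s aggregate, no coding overhead

    def clock_mhz(width_bits, ddr):
        # Words/s needed on a width_bits-wide bus, divided by the number
        # of latching edges per clock cycle (DDR latches on both edges).
        transfers_per_sec = LINE_RATE_BPS / width_bits
        return transfers_per_sec / (2 if ddr else 1) / 1e6

    print(clock_mhz(32, ddr=False))     # 312.5  -- single-edge Tsu/Th
    print(clock_mhz(32, ddr=True))      # 156.25 -- double clocking
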
The non-trivial part is the bit time.  By this I mean you have to get the
data across the interface (that is, driven from one device and latched on
the other) in around 3.2ns.  Currently 802.3z has an 8ns bit time, and
there have been some challenges with even that.  Still, I believe a 32-bit
interface is within today's technical feasibility and, other than the high
number of pins, is cost effective.
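
As a rough illustration of why a 3.2ns bit time is tight, a hedged
source-synchronous budget sketch; every constant below other than the bit
time is an assumed, illustrative value, not a number from 802.3z or this
thread:

    bit_time_ns = 3.2    # 10 Gb/s / 32 bits => 3.2 ns per transfer
    tco_ns      = 1.5    # assumed driver clock-to-output delay
    flight_ns   = 1.0    # assumed PCB flight time (~6 in of FR4 trace)
    tsu_ns      = 0.5    # assumed receiver setup time

    margin_ns = bit_time_ns - (tco_ns + flight_ns + tsu_ns)
    print(f"{margin_ns:.1f} ns")   # 0.2 ns left for skew, jitter, and Th
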

Looking forward:  

A 16-bit interface would require a 1.6ns bit time.  The 312.5MHz buffer
frequency this implies requires small-signal-swing technology (RAMBUS is
one example, though I do not endorse it), severely limiting trace lengths
and I'm sure a lot of other things.  The clocks would likely need to be
differential.  It is challenging, but doable.  I question whether the cost
benefit outweighs the complexity and limitations in the near term.

An 8-bit interface would require a 0.8ns bit time.  I don't know of any
technology capable of doing this single-ended today - a great IP play if
someone has it.  Within the next few years this is likely to require
differential signaling, so you lose the pin-reduction benefit.  Also
necessary to consider are the Automated Test Equipment (ATE) limitations:
above 800MHz data rates, ATE is very limited and VERY expensive!
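
All of the bit times and buffer frequencies quoted above fall out of the
same division; a quick sketch, again assuming a 10 Gb/s aggregate rate:

    LINE_RATE_BPS = 10e9

    for width in (32, 16, 8):
        bit_time_ns = width / LINE_RATE_BPS * 1e9       # ns per transfer
        toggle_mhz  = LINE_RATE_BPS / width / 2 / 1e6   # data flips at most
                                                        # every other bit
        print(f"{width:2}-bit: {bit_time_ns:.1f} ns, {toggle_mhz} MHz toggle")
    # 32-bit: 3.2 ns, 156.25 MHz; 16-bit: 1.6 ns, 312.5 MHz; 8-bit: 0.8 ns, 625 MHz
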

My gut feeling is that a 32-bit interface with an option for 16 is within
the scope of the standard.  However, to consider an 8-bit interface, I
would expect someone would have to prove technical feasibility first.

Regards,
Shawn

-----Original Message-----
From: Simon L. Sabato [mailto:simons@level1.com]
Sent: Tuesday, June 08, 1999 12:08 PM
To: Jaime Kardontchik
Cc: 'stds-802-3-hssg@ieee.org'
Subject: Re: 10G-BASE-T question




10 Gig'ers, 

Even before coding, a 32-bit interface already requires I/O speeds of
300+ MHz.  Is it even possible (or will it be in the required timeframe)
to run a non-differential synchronous bus at 1.25GHz across a PCB at
reasonable cost/EMI?  Also, will one-fourth the pins running four times
as fast be any quieter for the receiver?  (This isn't a rhetorical
question; I'm out of my turf.)

In 100Mb the MII interface evolved to lower pincount versions as
"standard" IC processes improved.  This same model could be followed.  I
don't think that we'll build extremely pad-limited chips out of a desire
to stick with a standard interface.  

An alternative would be to define a lower-pincount interface from the
start.  I think that we'd end up seeing GaAs or SiGe "bridge" chips
which then take the narrow/fast bus and convert it to a wide/slow (300+
MHz? slow?) bus.  This "bridge" could then be sucked into the chip
holding the MAC as 1GHz+ I/O busses become available.  This way we could
avoid the current situation in 100Mb where many are moving away from the
IEEE standard MII in search of more cost-effective alternatives.  This
raises the question: is it more important for the standard to include an
interface which is cost effective today, or one more viable in the future?
It is my opinion that the former helps a successful introduction, whereas
the latter will tend to take care of itself.

Perhaps a 600+MHz 16-bit interface would be a good compromise.  RDRAM
interfaces are (barely?) manufacturable in volume today... if I'm not
mistaken they offer 800MHz across 16 bits.  They built a nice patent
portfolio on the technology required to do this, although they are
building a multidrop bus rather than a point-to-point connection.  They
also require hard IP cores, custom to the foundry, to get these speeds
in CMOS.  By the time 10Gig chips go into development, 600+MHz may be
quite reasonable in a more standard design. 
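
A quick check of that compromise (assuming the same 10 Gb/s aggregate
rate as in the sketches above):

    # Transfer rate a 16-bit bus needs to carry 10 Gb/s, in MHz:
    print(10e9 / 16 / 1e6)   # 625.0 -- the "600+MHz" figure; RDRAM's
                             # 800MHz would leave headroom for coding
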

I'm concerned, though, that above 300MHz too much time may be spent
specifying, simulating, and trying to build the 10GMII interface rather
than the other side of the PHY.  And, at least at first, the chips for
10G systems are going to be plenty big enough to support the extra
pins.  Thoughts? 

-Simon L. Sabato
-Level One Communications 


Jaime Kardontchik wrote:
> 
> Rogers,
> 
> The figure on page 4 emphasizes mainly the maximum clock used in the
> 10G-BASE-T architecture, 1.25 GHz, and the maximum baud rate
> in the optical fiber, 1.25 Gbaud/sec.
> 
> The actual width of the MII interface is a question open to discussion.
> 
> Shimon Muller (Sun) suggested using a 32-bit wide interface (64-bit
> wide if we include both the Tx and Rx). Dan Dove (HP), in the audience,
> suggested that if we use a 32-bit wide interface we might end up with
> a chip that is all I/Os surrounding a tiny design, and he suggested
> taking an aggressive approach here and sticking to an 8-bit wide interface.
> 
> I tend to agree with Dan for the same reason and for another one:
> 32 TTL-type output drivers at the Rx would introduce a lot of
> switching noise that could affect the analog blocks in the chip,
> including the jitter of the transmitter.
> 
> Jaime
> 
> Jaime E. Kardontchik
> Micro Linear
> San Jose, CA 95131
> email: kardontchik.jaime@ulinear.com
> 
> "Rogers, Shawn" wrote:
> 
> > Jaime, I have a question concerning your presentation in Idaho.  On page 4
> > of your presentation you state the following when comparing your 10G-Base-T
> > proposal to 802.3ab (1000Base-T):
> >
> >    1000Base-T           10G-Base-T
> >     GMII-8bit wide      10GMII - same
> >
> > Are you advocating a byte-wide chip-to-chip interface between the PCS and
> > Reconciliation sublayer in the MAC running at 1.25GHz?
> >
> > Regards,
> > Shawn
> >
> > -----Original Message-----
> > From: Jaime Kardontchik [mailto:kardontchik.jaime@ulinear.com]
> > Sent: Monday, June 07, 1999 5:57 PM
> > To: stds-802-3-hssg@ieee.org
> > Subject: 10G-BASE-T presentation
> >
> > Hello 10G'ers,
> >
> > For those that were not able to attend the Idaho meeting:
> >
> > The presentation on the 10G-BASE-T architecture given
> > in Idaho included more material than the original posted
> > two weeks ago.
> >
> > The updated presentation as given in Idaho is now in the
> > web site,  replacing the old one:
> >
> > http://grouper.ieee.org/groups/802/3/10G_study/public/june99
> >
> > Jaime
> >
> > Jaime E. Kardontchik
> > Micro Linear
> > San Jose, CA 95131
> > email: kardontchik.jaime@ulinear.com