RE: Link Status thoughts
Thanks for your feedback. Please find my comments below.
At 17:21 00/11/03 -0700, pat_thaler@xxxxxxxxxxx wrote:
> Therefore, I think we could agree that the Local Fault signal (or Break
> Link in my term) on the receive path will be responded to by the Remote
> Fault signal on the transmit path at the sublayer where the RF
> mechanism is implemented.
> <PAT> Yes, I could agree with that. However, you seem to use BL in
> a more complex way than this implies. The proposal you sent me Wednesday
> sends a BL whenever a reset is occurring. If I am interpreting correctly,
> it also gives a sublayer that detects a fault on the input for a direction
> the choice of sending either BL or idle without any LS columns. A receiver
> interprets failure of the LS heartbeat as a BL condition. I have a couple
> of problems with this:
> - If a device is seeing BL conditions, it has no way to determine whether
> the device at the other end is having intermittent transmitter problems
> or if it is just resetting frequently. (Maybe someone is just doing one
> of those annoying software installs that requires bunches of reboots :^(
> or is cycling power frequently.)
This is OAM&P: fault debugging in the remote device. I myself think
we had better support it, but my current interpretation of 'No OAM&P' doesn't
require this debugging. It is resolved at the Link Partner by its MDIO.
> - I have yet to hear any good argument for why one side of the link needs
> to know the other is resetting. The only case where we have cared about
> communicating a reset is to run auto-negotiation. 802.3 links that don't
> auto-negotiate don't have a way to send a "reset". Even if we know the
> other side is resetting, we don't have a use for that information. If
> one tries to use it to initiate one's own reset, then one gets into annoying
> problems specifying timers to keep reset from being an endless loop.
To avoid this endless loop, the LSS mechanism has been designed so that
a BL is acknowledged by an RF.
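To make the loop avoidance concrete, here is a rough sketch in Python (all names -- Signal, partner_response, initiator_step -- are invented for illustration, not from any draft): a BL is answered by an RF, never by another BL, so neither side's reset can retrigger the other's.

```python
# Hypothetical sketch of the asymmetric BL/RF acknowledgment. Because the
# partner answers BL with RF (not BL), no reset loop can form.

from enum import Enum

class Signal(Enum):
    IDLE = "idle"
    BL = "break_link"    # local fault: "this simplex path is disrupted"
    RF = "remote_fault"  # acknowledgment sent on the reverse path

def partner_response(received: Signal) -> Signal:
    """Link Partner: answer BL with RF, never with another BL."""
    return Signal.RF if received is Signal.BL else Signal.IDLE

def initiator_step(sent: Signal, echoed: Signal) -> bool:
    """Local Device: its reset completes once its BL is acknowledged by RF."""
    return sent is Signal.BL and echoed is Signal.RF

# One round trip: BL out, RF back, reset acknowledged. RF does not itself
# trigger a new BL from either side, so the exchange terminates.
ack = initiator_step(Signal.BL, partner_response(Signal.BL))
print(ack)  # True
```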
> So, my current position is that a device should not send a special signal
> when it is resetting. If at some point I get convinced that there is a reason
> that a device at one end of the link needs to tell the other end of the link
> that a reset is in progress, then that indication will need to be a different
> indication than the one used for "there is a disruption in this simplex
I believe that remote reset is indispensable in 10GbE, especially for 40km
links. Having no reset mechanism is a nightmare for a large-network
manager. In the real world, there is no perfect system. Traveling just to
push the reset button is .....
What I want is to push the reset button of the Link Partner remotely,
and hopefully to know its reaction: it reacts correctly (reset complete; Layer-1
is fine) or incorrectly (Layer-1 is gone).
> Note that Shimon et al. use BL in yet another way. Receiving BL initiates
> a reset. I'm not convinced that it is necessary to be able to ask the other
> end of the link to reset, though I'm also not against providing a mechanism
> for it.
> So one of the fundamental issues that needs to be resolved before we
> are likely to get consensus is which of the following need to be supported
> by link signalling:
> A) The simplex link in this pair isn't working
> B) This simplex link isn't working
> C) There is a fault detected internal to this physical layer
> D) There is a physical fault that can't be isolated to this physical layer
> E) I'm resetting
> F) You should reset
> I believe we should do A and B rather than C and D. MDIO management can
> be used to determine if the fault that triggered A or B can be isolated
> to a particular sublayer. We seem to be getting dangerously close to
> consensus on this part. I'm lukewarm about supporting F and against
> supporting E. If we support E or F, each one should be distinctly
In my mind, F) is in the same category as A) and B). I never proposed E);
that's why I use link_reset (channel_reset) instead of reset.
C) and D) are OAM&P, which I have been recommending to this community for a
while.
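For reference, a tiny summary of where the discussion stands (labels A-F follow Pat's list above; the groupings reflect only the positions stated in this exchange, and the variable names are mine):

```python
# Illustrative summary of the six candidate link-signalling indications
# and the stated positions in this thread. Names are invented for clarity.

INDICATIONS = {
    "A": "The simplex link in this pair isn't working",
    "B": "This simplex link isn't working",
    "C": "Fault detected internal to this physical layer",
    "D": "Physical fault that can't be isolated to this physical layer",
    "E": "I'm resetting",
    "F": "You should reset",
}

PAT_SUPPORTS = {"A", "B"}          # C/D isolation left to MDIO management
OSAMU_SUPPORTS = {"A", "B", "F"}   # E never proposed; C/D treated as OAM&P

# The near-consensus part of the debate:
print(sorted(PAT_SUPPORTS & OSAMU_SUPPORTS))  # ['A', 'B']
```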
> <back to Osamu's text>
> I think the remaining differing preferences between us on this RF/LF(BL)
> issue are:
> - Relay the RF&LF status, sublayer by sublayer, by defining
>   (RS/)XGXS/PCS-specific RF/LF signals for each.
> - In a Local Fault condition, LF must be sent up the receive path.
>   (No LF means 'link is fine'.)
> - Use some variety of /Z/Z/Z/Z/ Column for signaling identification.
> versus:
> - Define the RF&LF(BL) mechanism only in the RS. RF&LF(BL) signals are
>   transparent in the intermediate sublayers (XGXS/PCS) in the standard.
>   Leave the practical instantiation point (chip) to the implementation.
> - In a Local Fault condition, LF(BL) need not be sent up the receive path.
>   No LF means another Local Fault: 'Link Signaling is NOT fine'.
>   Instead, when 'link is fine', NoRF&NoLF should be passed at regular
>   intervals as a heartbeat.
> - Use some variety of /Z/D/D/D/ Column for signaling identification.
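To visualize the two column shapes on the four lanes, a hedged sketch: the 'Z' code is a stand-in for the proposal's special control code, and the status byte values (NO_FAULT/LF/RF) are placeholders I invented, not proposed encodings.

```python
# Sketch of the two proposed Link Status column shapes across the four
# XGMII/XAUI lanes. All values here are illustrative placeholders.

Z = "Z"                                # placeholder special control code
NO_FAULT, LF, RF = 0x00, 0x01, 0x02    # hypothetical status payloads

def zzzz_column() -> tuple:
    """First proposal: all four lanes carry the control code; the fault
    type would be distinguished by some variety of the /Z/ code itself."""
    return (Z, Z, Z, Z)

def zddd_column(status: int) -> tuple:
    """Second proposal: one control lane plus three data lanes, leaving
    room to encode NoRF&NoLF (heartbeat), LF, or RF in the data bytes."""
    return (Z, 0x00, 0x00, status)

print(zddd_column(RF))  # ('Z', 0, 0, 2)
```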
> <PAT> RE: "Leave the practical instantiation point (chip) for the implementation."
> Any time you have:
> sublayer A
> sublayer B
> compatibility interface
> sublayer C
> sublayer D
> then the practical instantiation point for a function is up to the
> implementer as long as the function occurs on the defined side of
> the compatibility interface. For instance, if sublayer A is an RS
> and sublayer B is a DTE XGXS without an exposed XGMII interface between
> them, then a function that is defined as part of the RS could be
> performed by circuitry that is mostly doing XGXS tasks. In fact, the
> functions of the RS and XGXS could be all mixed together, and it's
> none of the standard's business. The important thing is that when
> we look at what's going on at the compatibility interface, we see all
> the right behaviors. This is always true, and if that is what you
> meant by the statement, I absolutely agree.
Yes, this is exactly what I meant. Thank you for the better explanation.
> On the other hand, it wouldn't be correct to leave the function
> out of sublayer A and put it into sublayer C in the instance above.
> If functions are moved across compatibility interfaces, then the
> interface isn't providing compatibility - a BAD thing. If the
> compatibility interface is optional, one can of course implement
> sublayers A through D in an implementation and reorganize the
> functionality to one's heart's content.
> <back to Osamu>
> I would like to add my comment on a heartbeat. In New Orleans the
> main objection to this LSS nature seemed to come from the argument 'we
> already have Idles for a heartbeat', which resulted in Y:5, N:29 (A:>40)
> (straw poll). However, I am not yet convinced that many of them
> recognized the fact that 802.3ae will have multiple intermediate
> links such as XGXS-to-XGXS, PCS-to-PCS, and XGXS-to-XGXS. I could
> agree that Idles are best used as a heartbeat for each intermediate
> link. My argument here is that we had better adopt another heartbeat
> for overall link status, since this minimizes the intermediate PCS/XGXS
> requirement in the standard: just produce Idles if they do not have
> input sync. Neither an Idle Equivalent nor a BL/RF relay at each PCS/XGXS
> is required. Note that this might work well even without a pin from
> PMD/PMA to PCS for out-of-band LF signaling, while I myself have no
> preference on whether or not to use a pin here.
> <PAT> The reason one wants a pin from PMD/PMA to PCS for out-of-band
> LF signalling is partly because of the crosstalk problem. When a
> loss of signal condition occurs, then the sensitivity of the receiver
> may allow it to pick up crosstalk from the transmitter well enough
> that it gets a good signal from it at least some of the time. An LF
> pin prevents the receiver from locking to that.
Thanks. I understand now. Now I support a 'Pin' here.
> I do not understand what you mean by "neither Idle Equivalent nor
> BL/RF relay at each PCS/XGXS is required." I thought your proposal
> was outputting idles from a sublayer when the input was not locked.
> Isn't that "Idle Equivalent"? Each sublayer in your proposal that
> is receiving BL/RF on an input sends BL/RF on its output. Isn't
> that "BL/RF relay"? It seems that the difference between the
> proposals is that in mine a sublayer that can't lock to its
> input sends out idle with BL inserted while in yours it has
> a choice of doing that or sending idle without BL at the cost
> of requiring sending a link status indicating a good link and
> a heartbeat checker for receiving link status. Either way should
> work and either can be implemented reasonably simply.
I admit my sentence was too short. Let me explain further here.
Idle Equivalent: Idles including an RF or LF indication; not pure Idle.
BL/RF relay: BL/RF translation from R-PCS to XGMII to XGXS, etc.
In your case the XGXS should always understand what kind of Idles are
received from the XGMII, and then translate them to the respective EMI-
friendly Idles. This needs a modification in the AKR state machine.
All the intermediate PCS/XGXS must have translation and generation
capability for these three (?) kinds of Idles.
In my case the XGXS doesn't need to understand what kind of Idles are
received from the XGMII; it just sends with the AKR state machine,
assuming that it is designed to be transparent to the LS Column.
The intermediate PCS/XGXS don't need to generate the LS Column.
In other words: in your case, each intermediate link has three kinds
of heartbeat: pure, with RF, and with LF. Each sublayer has to recognize
these differences and generate its own three kinds of heartbeat.
This needs a bit of intelligence.
In my case, each intermediate link has one heartbeat: Idles. The overall
end-to-end link has another heartbeat: Link Status Columns. The
intermediate sublayers don't need to recognize or generate this LSS
heartbeat; they only need to generate a single Idle heartbeat.
Only the end sublayer should understand the LSS heartbeat.
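The endpoint-only heartbeat check could look roughly like this (the interval, timeout, and all names are my illustrative assumptions, not from any proposal): intermediate sublayers pass LS Columns through untouched, and only the end sublayer runs the watchdog.

```python
# Minimal sketch of the end-sublayer LSS heartbeat checker. Absence of the
# NoRF&NoLF heartbeat within the timeout means 'Link Signaling is NOT fine'.

INTERVAL = 1.0           # hypothetical heartbeat period (seconds)
TIMEOUT = 3 * INTERVAL   # tolerate a few missed beats before declaring a fault

class HeartbeatChecker:
    def __init__(self, now: float):
        self.last_seen = now

    def on_ls_column(self, now: float) -> None:
        """Called whenever a NoRF&NoLF Link Status Column arrives."""
        self.last_seen = now

    def link_ok(self, now: float) -> bool:
        """No heartbeat within TIMEOUT => treat as a Local Fault."""
        return (now - self.last_seen) <= TIMEOUT

hb = HeartbeatChecker(now=0.0)
hb.on_ls_column(now=1.0)
print(hb.link_ok(now=2.0))   # True: heartbeat is current
print(hb.link_ok(now=10.0))  # False: heartbeat lost
```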
> Furthermore, this Local Fault signal (or Break Link in my term) will
> eventually be responded to by a Remote Fault signal from the RF mechanism
> in the Link Partner. This implies that the channel_reset issued by
> the Local Device's STA can be acknowledged by receiving this RF.
> <PAT> I don't see a use for this acknowledge protocol. Furthermore,
> if you are suggesting that the RF needs to be received before the
> reset can complete, then I even more strongly disagree. It would
> be adding a protocol without a purpose.
This might be a kind of OAM&P feature: knowing whether Layer-1 is
still alive or not. If we have this protocol, we can do fault
debugging even if the remote reset fails (the MAC channel could not
be recovered). How long we should send this remote_reset signaling
is another concern.
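A rough sketch of the latched channel_reset idea, with a bounded retry count standing in for the "how long to send" question (the names channel_reset, rf_rcvd, and the MAX_TRIES bound are my illustrative assumptions, not from any draft):

```python
# Hedged sketch: the control register bit stays latched (and BL keeps being
# sent) until RF is received, with a give-up bound so signalling is finite.

MAX_TRIES = 5   # hypothetical bound on how long remote_reset is signalled

def run_channel_reset(rf_arrives_on_try: int) -> str:
    channel_reset = True                  # STA sets the control register bit
    for attempt in range(1, MAX_TRIES + 1):
        # ... send BL on the transmit path ...
        rf_rcvd = attempt >= rf_arrives_on_try
        if rf_rcvd:
            channel_reset = False         # latch clears only on receiving RF
            return "reset acknowledged (Layer-1 alive)"
    return "no RF: Layer-1 gone, escalate to OAM&P debugging"

print(run_channel_reset(rf_arrives_on_try=2))   # acknowledged on 2nd try
print(run_channel_reset(rf_arrives_on_try=99))  # gives up after MAX_TRIES
```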
> So, if we design that the control register bit channel_reset is
> latched high until RF_rcvd, we can perform the complete reset of
> the duplex channel without employing Shimon's state synchronization
> process between the Local Device and the Link Partner. I have not
> yet been convinced that we should employ such status synchronization, which
> requires a longer waiting time (~300 ms) or an unnecessary link-distance
> restriction. I believe that a BL responded to by an RF is better than a BL
> responded to by a BL.
> <PAT> Unfortunately, with conflicting schedules and meeting our editor
> commitments there has not been enough time spent on refining RF/BL
> proposals. I think Shimon's sync process could be simplified. I like
> the signalling in that proposal but would be in favor of removing
> the handshaking aspects.
I am not yet convinced why we should restrict the available link
distance with the link status mechanism.
I would like to know the reason why we need the timer-based link-up
process.
> At 12:55 00/11/01 -0700, pat_thaler@xxxxxxxxxxx wrote: