[10GBT] Questions regarding error rates for LDPC & PAM modulation
If you are not deeply involved in PAM modulation or LDPC codes, you
should probably switch off now. Much of the following will appear
meaningless to 99.9% of the world's population.
If you are still reading at this point, you have my sympathy :-) In
order to make a realistic analysis of the mean time to false packet
acceptance, we need to understand how PAM modulation and LDPC coding
generate errors. I have a list of questions to kick off the discussion;
as we progress on the subject, I will gladly sponsor ad-hoc calls or
meetings to discuss the false packet acceptance rate.
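To give a feel for where this analysis leads, here is a back-of-envelope
mean time to false packet acceptance (MTTFPA) sketch in Python. Every
number in it is a placeholder assumption for illustration (frame size,
residual bad-frame probability, and the common 2^-32 approximation for a
CRC-32 miss), not an adopted 10GBASE-T value:

```python
# Back-of-envelope MTTFPA estimate. All numbers are illustrative
# assumptions, NOT adopted 10GBASE-T parameters.
# MTTFPA ~ 1 / (bad-frame rate * P(CRC-32 misses the residual errors))

frame_bits = 12_000        # assumed ~1500-byte frame
line_rate = 10e9           # 10 Gb/s
p_bad_frame = 1e-12        # assumed P(frame leaves the decoder with errors)
p_crc_miss = 2.0 ** -32    # common approximation for CRC-32 miss rate

frames_per_sec = line_rate / frame_bits
false_accepts_per_sec = frames_per_sec * p_bad_frame * p_crc_miss
mttfpa_years = 1.0 / false_accepts_per_sec / (3600 * 24 * 365)
print(f"MTTFPA ~ {mttfpa_years:.3g} years")
```

The point of the questions below is to replace the assumed p_bad_frame
with something derived from the actual PAM and LDPC error behavior.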
1. PAM modulation
At a given SNR, we have a corresponding BER (or probability of error).
However, with more than 1 bit per symbol, there should be separate
probabilities for P(single bit error) and P(double bit error) - where
the latter is not simply P(single bit error) squared. Also, I would
expect some correlation between errors on one pair and errors on the
other pairs.
For the PAM-12 proposal there are 2 coded bits on each symbol, so I
expect that there will be a set of probabilities for 1 bit error;
multiple bit errors on multiple pairs; and multiple bit errors on one pair
(also with errors on other pairs). Finally, the uncoded bits on the same
symbols should have a higher (effective) SNR, but I expect there will
still be a nonzero BER for those bits. Bit errors on uncoded bits should
correlate (strongly) with errors on the coded bits.
Could someone post a detailed analysis of these factors when examining
the error rates of PAM modulated (especially the adopted PAM) systems?
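To make the single-versus-double bit error question concrete, here is a
small Monte Carlo sketch. It uses Gray-coded PAM-4 over AWGN with an
arbitrary noise level - both assumptions chosen for simplicity, not the
adopted PAM - and tallies how many bits are wrong in each received
symbol:

```python
import random

# Monte Carlo sketch: per-symbol bit-error counts for Gray-coded PAM-4
# over AWGN. PAM-4 and the noise level are illustrative assumptions.
random.seed(1)
levels = [-3, -1, 1, 3]
gray = {-3: (0, 0), -1: (0, 1), 1: (1, 1), 3: (1, 0)}  # Gray mapping
sigma = 0.8        # assumed noise standard deviation
N = 200_000

def slice_to_level(y):
    # Minimum-distance slicer: pick the nearest nominal level.
    return min(levels, key=lambda l: abs(y - l))

counts = {0: 0, 1: 0, 2: 0}
for _ in range(N):
    tx = random.choice(levels)
    rx = slice_to_level(tx + random.gauss(0, sigma))
    nerr = sum(a != b for a, b in zip(gray[tx], gray[rx]))
    counts[nerr] += 1

p1 = counts[1] / N
p2 = counts[2] / N
print(f"P(1-bit err)={p1:.2e}  P(2-bit err)={p2:.2e}  P1^2={p1*p1:.2e}")
```

With Gray mapping, the 2-bit-error probability comes from two-level
slicer jumps, so it tracks the tail of the noise rather than the square
of the 1-bit probability - exactly the kind of relationship the adopted
PAM analysis would need to pin down per pair.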
2. LDPC coding
I am more familiar with RS coding than LDPC, but I understand that the
proposed LDPC codes are based on RS (rather than irregular codes). Is
there an indication of "detected, uncorrected" errors in an LDPC code
block? If so, what are the corresponding rates for undetected errors for
LDPC? Do the BER simulations presented represent "undetected,
uncorrected" errors? Also, is the error correction and detection
mechanism favorable to bytes (as opposed to bits)? In other words should
we be looking at the error rate in terms of byte errors (or other unit
size)? Finally, I assume that any block code will have a strong
likelihood of multiple errors. For a given P(error) in a code block, what
is the P(second error)? Is there any spatial relationship between errors
within the block?
Could someone post a detailed analysis of the nature of errors after
LDPC decoding?
This effort will take some serious thought - many thanks in advance for
your input on this topic.