
[ax25-layer2] 7-byte address proposal

Glenn Thomas glennt at charter.net
Thu Aug 3 09:55:45 UTC 2006


At 12:05 AM 8/3/2006, you wrote:

>I've been wanting to experiment with a Reed-Solomon concatenated ECC
>extension to AX.25.  I'm thinking a standard AX.25 frame could be followed
>by the ECC frame, which would have the complement of the data frame's FCS.
>That'd cause old TNCs to ignore it completely, and would let supported TNCs
>be sure that the ECC block belongs to a particular data frame.
>
>If I understand the ECC scheme correctly (I haven't implemented one yet)
>then a (255,223) code would add 32 bytes of data.  Add a few more for the
>FCS complement, and you've added about 230 msec of airtime to a 1200 baud
>transmission, and you can correct up to 16 symbol errors in a 223-byte data
>frame.  More, if you have side information from the demodulator.  That'd be
>a huge improvement for APRS - I get marginal copy all the time where only a
>few bits get flipped and trash the whole frame.
>
>Scott
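
A quick sanity check of the quoted (255,223) numbers, sketched in Python
(raw bit arithmetic only; HDLC flags, bit stuffing, and TXD padding are
ignored):

```python
# Back-of-envelope airtime cost of the proposed RS(255,223) ECC trailer.
n, k = 255, 223
parity_bytes = n - k            # 32 bytes of Reed-Solomon parity
fcs_bytes = 2                   # complemented FCS tying ECC frame to data frame
baud = 1200                     # 1200 bps AFSK, one bit per symbol
extra_bits = (parity_bytes + fcs_bytes) * 8
airtime_ms = extra_bits * 1000 / baud
t = (n - k) // 2                # symbol-correction capability of the code
print(round(airtime_ms), t)    # -> 227 16
```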

Lessee... the main consideration here is the probability of a packet 
being corrupted versus the cost of correcting it by various means. 
Note that this is likely an AX.25 v3.0 issue, because a legacy TNC is 
unlikely to be able to cope with EDAC without a significant firmware upgrade.

To simplify the discussion, let me make the assumption that an EDAC'd 
packet, even one with errors in it, will always decode to a correct 
packet and that any non-EDAC'd packet will be corrected by a single 
retry. Reality is more complicated of course, but I can at least make 
the point with this grossly simplified case.

In the EDAC'd case described, packets never need to be retransmitted. 
However, single-error-correct, double-error-detect (SECDED) coding 
(several LSI chips are available that will do this; they're used 
mainly in high-reliability RAM systems) imposes an overhead of 
approximately 23%: a (32,26) code sends an additional 6 syndrome bits 
for every 26 data bits transmitted.
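
The 6-bit figure follows from the Hamming bound; here's a hedged sketch
(a standard construction, not tied to any particular LSI chip):

```python
# Check bits needed for a Hamming SECDED code over m data bits: single-error
# correction needs the smallest r with 2**r >= m + r + 1, and one extra
# overall-parity bit upgrades the code to double-error detection.
def secded_check_bits(m):
    r = 1
    while 2 ** r < m + r + 1:
        r += 1
    return r + 1                # +1 overall-parity bit for DED

for m in (16, 26, 32):
    print(m, secded_check_bits(m))   # -> the (22,16), (32,26), (39,32) codes
```

Six check bits cover up to 26 data bits, which is where the ~23% (6/26)
overhead comes from.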

In the non-EDAC case described the cost to correct by retry is:

  <p> * (<b> + <n>)

where <p> is the probability of a given packet being corrupted
       <b> is the time (in bits) required to do the retry,
           including the usual packet overheads due to TXD, TXTAIL, etc.
       <n> is the time (in bits) required to send the NAK,
           including the usual packet overheads due to TXD, TXTAIL, etc.

I don't have good estimates for <b> and <n>, as most AX.25 systems 
are configured uniquely.
An estimate of <p> is a matter of the power, the path, antennas, the 
weather... Perhaps someone with more time can do a study to explore 
the <p><b><n> space and see where the breakpoint is between EDAC and 
retry. That study would be a good piece of knowledge to have.
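
Under the simplified model the breakpoint is easy to solve for once <b> 
and <n> are pinned down. A sketch with purely illustrative values (the 
real numbers are link-dependent, as noted above):

```python
# EDAC always pays its syndrome overhead on every frame; a plain frame pays
# p*(b + n) in expected retry airtime (a single retry always succeeds, per
# the simplifying assumption).  The breakpoint is where the costs are equal.
overhead = 0.23   # SECDED syndrome overhead (~23%)
b = 2000          # bits to retry a frame, incl. TXD/TXTAIL (illustrative)
n = 400           # bits to send the NAK, incl. the same padding (illustrative)

p_breakpoint = overhead * b / (b + n)
print(round(p_breakpoint, 3))   # -> 0.192; above this <p>, EDAC wins
```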

This is not an unreasonable model. Presumably the probability of an 
uncorrectable EDAC'd packet <pu> is low enough that its 
retransmission cost <pu>*(<b'> + <n'>) is negligible. In the same 
high-error environment, multiple retries of the non-EDAC'd packet can 
be expected. This is a refinement that whoever chooses to do the 
breakpoint study can explore.
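
One such refinement: if each transmission attempt is corrupted 
independently with probability <p>, the expected number of retries is 
geometric rather than the single retry assumed above:

```python
# Expected retries for per-attempt corruption probability p:
# p + p**2 + p**3 + ... = p / (1 - p), so the retry cost grows faster
# than the single-retry model as the channel degrades.
def expected_retries(p):
    return p / (1.0 - p)

for p in (0.05, 0.2, 0.5):
    print(p, round(expected_retries(p), 3))   # -> 0.053, 0.25, 1.0
```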

Another consideration is how packets tend to be corrupted. Most 
systems seem to assume that the probability of any particular bit 
being corrupted is the same as that of any other bit being corrupted. 
In a reasonably benign network situation this may be close to the 
truth. Of course in a benign situation, <p> is usually very small. 
Thus retry (if available) is very likely to be more effective than EDAC.

In a real world channel, bit errors tend to cluster, so an EDAC 
scheme needs to handle the case where, perhaps, one word is complete 
trash and the rest of the message is errorless. There are several 
ways to handle this; one of the simplest is to redistribute the data 
& syndrome bits for each EDAC'd word to put the maximum distance 
between them in the message. When the receiver shuffles the bits back 
into their original order, SBEs (single bit errors) will remain SBEs 
while error clusters will be spread across the message as a number of SBEs.
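
That bit-redistribution idea can be sketched as a simple block 
interleaver (an illustrative construction; the post doesn't commit to 
a particular scheme):

```python
# Write bits row-by-row into a depth x width array, transmit column-by-column.
# A burst of up to `depth` consecutive channel errors then de-interleaves into
# at most one error per codeword -- i.e. correctable SBEs.
def interleave(bits, depth, width):
    rows = [bits[i * width:(i + 1) * width] for i in range(depth)]
    return [rows[r][c] for c in range(width) for r in range(depth)]

def deinterleave(bits, depth, width):
    cols = [bits[c * depth:(c + 1) * depth] for c in range(width)]
    return [cols[c][r] for r in range(depth) for c in range(width)]

msg = list(range(24))              # stand-in for 4 codewords of 6 bits each
tx = interleave(msg, depth=4, width=6)
tx[8:12] = ['X'] * 4               # burst of 4 consecutive errors on the air
rx = deinterleave(tx, depth=4, width=6)
print([rx[i * 6:(i + 1) * 6].count('X') for i in range(4)])   # -> [1, 1, 1, 1]
```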

For similar reasons, it seems unwise to put all the syndrome bits in 
a block at the end of the packet where they are more susceptible to 
error clusters.

Someone mentioned that one would have to decode the entire packet 
just to read the EDAC'd header. Even if true (it's not), it's 
irrelevant. In AX.25 V2.2 it's necessary to run HDLC over the entire 
packet to validate it before you even consider looking at what you 
think is the header.

Finally, if we choose to adopt an EDAC scheme, then we no longer need 
HDLC. The only reason we have HDLC is error detection. If EDAC is 
already providing this, we don't need HDLC too.

There is some value of <p-breakpoint>, below which retry is more 
efficient and above which EDAC is more efficient. The trick now is to 
find out what <p-breakpoint> is and then compare it with the range of 
<p> that we see on real packet links. Oh... don't forget that while 
<p> may be fairly low on a standard 2m FM-packet link, it may be 
rather high on an HF circuit. In fact, <p> may be as much a matter of 
layer one considerations as anything else.

73 de Glenn wb6w



WAR IS PEACE!
FREEDOM IS SLAVERY!
IGNORANCE IS STRENGTH!
(be seeing you!)




