Traditionally, the design of codebooks used in compression proceeds from specifications that are either statistical or syntactic. The quintessential statistical approach is represented by Huffman codes, in which symbols are ranked according to their frequencies and then assigned, in order of decreasing probability, to longer and longer codewords. In a syntactic approach, the codebook is built out of patterns that display certain features, e.g., robustness in the face of noise, loss of synchronization, etc. The focal point in these developments is the structure of the codewords. For instance, a codeword is a pattern w of length m such that any other codeword must be at distance at least d from w, the distance being measured in terms of errors of a certain type: we can have only substitutions in the Hamming variant; substitutions, insertions, and deletions in the Levenshtein variant; and so on. Of course, the two aspects blend in the final code. With Huffman codes, for instance, once the characters are statistically ranked, a code with certain syntactic characteristics, notably the prefix property, is built. Likewise, once the codebook of an error-correcting code is designed, the statistics of the source are taken into account for encoding. However, these two stages are, as a rule, carried out somewhat independently.
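To make the two criteria concrete, the following minimal Python sketch (an illustration, not part of the original development) builds a Huffman prefix code from symbol frequencies, the statistical criterion, and checks the minimum pairwise Hamming distance of a fixed-length codebook, the syntactic criterion. The symbol frequencies and the repetition codebook used in the example are assumed for illustration only.

```python
import heapq
from itertools import combinations

def huffman_code(freqs):
    """Build a binary prefix code from symbol frequencies (statistical
    criterion): less probable symbols receive longer codewords."""
    # Each heap entry: (total weight, unique tiebreaker, {symbol: codeword so far}).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)   # two least probable subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, counter, merged))
        counter += 1
    return heap[0][2]

def min_hamming_distance(codebook):
    """Minimum pairwise Hamming distance of equal-length codewords
    (syntactic criterion: every pair must differ in at least d positions)."""
    return min(sum(a != b for a, b in zip(u, v))
               for u, v in combinations(codebook, 2))

if __name__ == "__main__":
    # Illustrative source statistics (assumed, not from the text).
    freqs = {"a": 0.45, "b": 0.25, "c": 0.15, "d": 0.10, "e": 0.05}
    print(huffman_code(freqs))   # frequent symbols get shorter codewords

    # A length-3 repetition codebook: minimum distance 3, so it can
    # correct a single substitution error.
    print(min_hamming_distance(["000", "111"]))   # prints 3
```

In the sketch the two stages are, as in the text, entirely separate: the Huffman construction never looks at codeword distances, and the distance check never looks at symbol probabilities.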