1. 2's complement is a way to represent both positive and negative integers in binary. For a positive number, you simply write the number in binary; the leftmost bit must be 0, otherwise the number is too large to be represented in the given number of bits. For a negative number, you first write the corresponding positive number in binary, flip all the bits, and then add 1. The main advantage of this notation over sign-magnitude is that positive and negative numbers can be treated uniformly in arithmetic, and, because every value (including zero) has a single representation, equality can be checked by a simple bit-for-bit comparison. For overflow detection, the complication is that negative operands must be handled as well as positive ones. When adding two numbers, overflow cannot occur if one is positive and the other negative. If both have the same sign, we check for overflow by seeing whether the sign bit of the result matches that of the operands; if it does not, there is overflow. For subtraction, if the two operands have the same sign, there can be no overflow; if they have opposite signs, we compare the sign bit of the result with that of the first operand (the one being subtracted from), and if they differ, we have overflow. (Sketches of the negation rule and of these overflow checks appear after these answers.)

2. A Huffman code uses variable-length encoding, assigning shorter codes to more frequently occurring characters at the expense of longer codes for less frequent ones. One consequence is that the boundaries between characters are not easy to determine. To handle this, the code is built to satisfy the "prefix property": no character's code is a prefix of the code of any other character. Using this property, one can build a suitable representation of the codes, such as a binary tree, and use it to decode the Huffman-coded stream unambiguously. The main advantage of the Huffman code is that, for many texts, the encoding will be shorter, perhaps much shorter, than ASCII. The disadvantage is that in some cases, in particular when the text contains many occurrences of characters that were expected to be infrequent, the result can be longer. (A small decoding sketch follows below.)
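
Below is a minimal C sketch of the negation rule described in answer 1 (flip all bits, then add 1), assuming an 8-bit width; the helper name twos_complement_negate is invented for illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* Negate an 8-bit value the two's-complement way: flip every bit, then add 1.
 * (Helper name is hypothetical, for illustration only.) */
static uint8_t twos_complement_negate(uint8_t x) {
    return (uint8_t)(~x + 1);
}

int main(void) {
    int8_t v = 5;
    uint8_t neg = twos_complement_negate((uint8_t)v);
    /* 5 = 00000101 -> flip = 11111010 -> add 1 = 11111011, which is -5 */
    printf("%d -> %d (bit pattern 0x%02X)\n", v, (int8_t)neg, (unsigned)neg);
    return 0;
}
```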
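
The overflow rules in answer 1 can be expressed directly in terms of sign bits. The following C sketch assumes 8-bit operands; the function names sign8, add_overflows, and sub_overflows are invented for illustration.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Sign bit (bit 7) of an 8-bit value: 0 for non-negative, 1 for negative. */
static int sign8(uint8_t x) { return (x >> 7) & 1; }

/* Addition overflows only when both operands have the same sign
 * and the result's sign differs from theirs. */
static bool add_overflows(uint8_t a, uint8_t b) {
    uint8_t sum = (uint8_t)(a + b);
    return sign8(a) == sign8(b) && sign8(sum) != sign8(a);
}

/* Subtraction (a - b) overflows only when the operands have opposite signs
 * and the result's sign differs from the sign of a (the minuend). */
static bool sub_overflows(uint8_t a, uint8_t b) {
    uint8_t diff = (uint8_t)(a - b);
    return sign8(a) != sign8(b) && sign8(diff) != sign8(a);
}

int main(void) {
    printf("%d\n", add_overflows(100, 50));            /* 1: 100 + 50 = 150 > 127  */
    printf("%d\n", add_overflows(100, (uint8_t)-50));  /* 0: mixed signs never overflow */
    printf("%d\n", sub_overflows((uint8_t)-100, 50));  /* 1: -100 - 50 = -150 < -128 */
    return 0;
}
```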
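
For answer 2, here is a small C sketch of decoding with a code tree, relying on the prefix property: walking bit by bit from the root, reaching a leaf always ends exactly one codeword. The example code (e=0, a=10, b=110, c=111) and the helper names leaf, branch, and decode are made up for illustration.

```c
#include <stdio.h>
#include <stdlib.h>

/* A node in the code tree: leaves hold a character,
 * internal nodes have two children (bit 0 = left, bit 1 = right). */
typedef struct Node {
    char symbol;            /* meaningful only at a leaf */
    struct Node *child[2];  /* both NULL at a leaf */
} Node;

static Node *leaf(char c) {
    Node *n = calloc(1, sizeof *n);
    n->symbol = c;
    return n;
}

static Node *branch(Node *zero, Node *one) {
    Node *n = calloc(1, sizeof *n);
    n->child[0] = zero;
    n->child[1] = one;
    return n;
}

/* Decode a string of '0'/'1' characters by walking the tree; the prefix
 * property guarantees a leaf marks the end of exactly one codeword. */
static void decode(const Node *root, const char *bits) {
    const Node *cur = root;
    for (; *bits; ++bits) {
        cur = cur->child[*bits - '0'];
        if (!cur->child[0]) {   /* leaf: emit its symbol and restart at the root */
            putchar(cur->symbol);
            cur = root;
        }
    }
    putchar('\n');
}

int main(void) {
    /* Example code, made up for illustration: e=0, a=10, b=110, c=111 */
    Node *root = branch(leaf('e'), branch(leaf('a'), branch(leaf('b'), leaf('c'))));
    decode(root, "0101100111");  /* prints "eabec"; freeing the tree omitted for brevity */
    return 0;
}
```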