It converts an analog sample value (a real number) into an N-bit binary representation. For the purposes of this lecture, assume that these N bits form a sign-magnitude representation (slight modifications would be used in practice). The A/D converter divides the range of input values into 2^N − 1 bins, one for each distinct N-bit value. (Sign magnitude yields one fewer distinct value than the 2^N bit patterns because +0 and −0 represent the same number.) This is illustrated below: