@developertn: If you are used to assembly then perhaps this explanation will make you happy. 16-bit, 32-bit, etc. refer to the largest value you can store in a general purpose CPU register. In 16-bit real mode, the general registers are AX, BX, CX, DX, etc. These are all 16-bit registers, able to store values between 0 and 65535, hence the CPU and all software running in this mode are 16-bit. You can perform 32-bit operations in this mode, but typically you have to combine two registers to do so.
Once you switch the CPU to 32-bit mode, the general purpose registers become 32 bits in size. To avoid confusion they are called EAX, EBX, ECX, etc., but they can now store 32-bit values (between 0 and around 4 billion). Programs running in this mode are 32-bit programs, because the largest numbers they can manipulate directly on the CPU are 32 bits in size.
Again, when you switch the CPU to 64-bit mode, the registers become 64 bits in size and are called RAX, RBX, RCX and so on. They can now store values between 0 and 18,446,744,073,709,551,615, which is a very large number.
So the reason a byte is always 8 bits in all modes is that the number of bits doesn't refer to the smallest unit (a byte); it describes the largest unit: the biggest integer the CPU itself can hold in a single register.
If you are a C programmer you can see this yourself, as the "int" data type is frequently held in a single CPU register. If you use the sizeof operator you can find out how many bytes it takes to store an int, for example:
Code: Select all
int a = 0;
/* sizeof yields a size_t, so cast to match the %d conversions */
printf("sizeof(int) is %d bytes (%d bits)\n", (int)sizeof(a), (int)(sizeof(a) * 8));
If you compile this under real-mode DOS it will print "sizeof(int) is 2 bytes (16 bits)". Compile it as a 32-bit Windows console program and it will say "sizeof(int) is 4 bytes (32 bits)". One caveat for 64-bit mode: most 64-bit compilers keep int at 4 bytes and widen long and/or pointers instead, so don't expect sizeof(int) to report 8 there.
Hopefully this makes sense.