Hexadecimal
The hexadecimal number system (base 16) uses sixteen different symbols, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E and F, to represent numbers. Because hex is a base 16 number system, it significantly reduces the number of characters needed to write a number.
For example:
The decimal (base 10) number ‘123456789’ is 9 characters long.
In binary (base 2) the equivalent number, ‘111010110111100110100010101’, is 27 characters long.
In hex (base 16) the equivalent number, ‘75BCD15’, is 7 characters long.
Hex is used for many different things, such as memory addresses, colour values and URL encoding. The reason hex is used instead of decimal is that the range of values which can be stored in a given number of characters is much greater in hex than in decimal. For instance, a 4-digit decimal number can represent 10,000 possible values (0 to 9999), whereas a 4-digit hex number can represent 65,536 possible values (0 to FFFF). Hex therefore makes numbers shorter, which is why it is often easier to read and type in assembly language code.
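The examples above can be checked in a few lines of Python (a sketch; the built-in bin() and hex() functions do the conversions):

    # Convert 123456789 to binary and hex and compare the lengths
    n = 123456789
    print(bin(n)[2:], len(bin(n)[2:]))   # 111010110111100110100010101 27
    print(hex(n)[2:], len(hex(n)[2:]))   # 75bcd15 7

    # Number of values representable in 4 digits of each base
    print(10 ** 4)                       # 10000 decimal values (0-9999)
    print(16 ** 4)                       # 65536 hex values (0-FFFF)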
Floating Point Numbers
The floating-point binary storage formats used by Intel processors were standardized by the IEEE organisation as 32-bit and 64-bit formats. "Floating point" refers to the position of the point: since there can be any number of digits before and after the point, the point "floats". The floating-point unit performs arithmetic operations on these numbers. The shorter 32-bit format is a three-part representation of a number that contains a point, made up of a sign, an exponent and a mantissa.
The Sign
A single bit represents the sign of a binary floating-point number. A 1 bit indicates a negative number, and a 0 bit indicates a positive number.
The Exponent
This indicates the power of two by which the mantissa is multiplied; in effect it records how far, and in which direction, the point has been shifted. In the 32-bit format the exponent is stored with a bias of 127.
The Mantissa
This is the positive fractional part of the representation, which holds the significant digits of the number. The term comes from logarithms: in the expression log 643 = 2.808 the mantissa is .808. Together with the exponent, which fixes where the point sits, the mantissa determines the value of the number.
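A short Python sketch can pull the three parts out of a 32-bit float using the standard struct module (the field widths are 1, 8 and 23 bits; the value -6.25 is just an example):

    import struct

    # View the raw bits of -6.25 packed as a 32-bit IEEE float
    bits = struct.unpack('>I', struct.pack('>f', -6.25))[0]

    sign     = bits >> 31            # 1 bit:  1 means negative
    exponent = (bits >> 23) & 0xFF   # 8 bits: stored with a bias of 127
    mantissa = bits & 0x7FFFFF       # 23 bits: the fractional part

    print(sign, exponent - 127, bin(mantissa))
    # 1 2 0b10010000000000000000000, i.e. -1.1001 x 2^2 = -6.25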
Unicode
Unicode is an international character set developed for the scripts of all the languages of the world. It is used for communicating and exchanging data between computer systems. The power of Unicode is that it enables human communication across languages, cultures and geographic areas. The code is complex, but it is a very valuable universal standard.
Unicode was devised so that one unique code is used to represent each character, even if that character is used in multiple languages.
The ASCII encoding system, developed in the 1960s, uses 7 bits, which is sufficient to represent the ordinary Latin alphabet. Today most computers use 8 bits to represent a character, but even this is not sufficient to represent additional characters, like accented letters, from every language.
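In Python, for example, every character maps to one unique code point, whatever the language, and is only turned into bytes when encoded (a sketch):

    # One code point per character, whether plain Latin, accented or a symbol
    for ch in 'Aé€':
        print(ch, hex(ord(ch)), ch.encode('utf-8'))
    # A 0x41 b'A'
    # é 0xe9 b'\xc3\xa9'
    # € 0x20ac b'\xe2\x82\xac'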
ASCII
(American Standard Code for Information Interchange) ASCII is the worldwide standard for the code numbers used by computers to represent all the upper- and lower-case Latin letters, numbers, punctuation, etc. There are 128 standard ASCII codes, each of which can be represented by a 7-bit binary number. ASCII uses numbers to represent characters because computers can only understand binary numbers.
An ASCII conversion table can be generated with a few lines of code, as in the sketch below.
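This Python sketch prints the printable codes (32 to 126) with their decimal, hexadecimal and 7-bit binary values:

    # Print an ASCII conversion table: decimal, hex, 7-bit binary, character
    for code in range(32, 127):
        print(f'{code:3d}  {code:02X}  {code:07b}  {chr(code)}')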
Extended ASCII
A set of codes that extends the basic ASCII set. The basic ASCII set uses 7 bits for each character, giving it a total of 128 unique symbols. The extended ASCII character set uses 8 bits, which gives it an additional 128 characters. The extra characters represent letters from foreign languages and special symbols for drawing pictures; exactly which characters the upper 128 codes stand for depends on the code page in use.
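For example, under code page 437 (the original IBM PC character set, assumed here) some of the upper codes are box-drawing symbols; in Python:

    # Decode three extended ASCII bytes under code page 437
    print(bytes([0xC9, 0xCD, 0xBB]).decode('cp437'))   # prints: ╔═╗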
Possible Errors Using Machine Language
Binary
Misinterpretation can be a problem when using binary. Numbers can become too long for humans to understand, as binary is only a base 2 number system. This is why humans prefer to use a decimal number system.
Mixing 16-bit and 32-bit binary can lead to problems in a computer's registers. If a 16-bit binary number is fed into a 32-bit register without being extended correctly, the output will be different from what was expected, which could lead to many other calculation errors.
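A Python sketch of the problem, assuming the 16-bit value is in two's complement: if the upper 16 bits of the register are simply left as zero instead of being sign-extended, a negative value is misread.

    # The 16-bit two's-complement pattern for -1
    value16 = 0xFFFF

    # Dropped into a 32-bit register with no sign extension
    wrong = value16                     # 0x0000FFFF reads as 65535, not -1

    # Correct sign extension copies the top bit into bits 16-31
    right = value16 - 0x10000 if value16 & 0x8000 else value16

    print(wrong, right)                 # 65535 -1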
Signed Numbers
The representation of signed numbers presents more problems. Signed numbers have smaller limits: an 8-bit signed number, for example, runs only from –127 to +127 (–128 to +127 in two's complement). If a computer expects a signed number but a normal unsigned binary number is inputted, the overall outcome could be different from what was expected.
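The same byte read both ways can be shown in a Python sketch (two's complement is assumed for the signed reading):

    pattern = 0b10000001                # one byte: 1000 0001

    unsigned = pattern                                        # reads as 129
    signed = pattern - 256 if pattern & 0x80 else pattern     # reads as -127

    print(unsigned, signed)             # 129 -127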
Floating Point
Floating-point number systems have different standards. The main standard is the IEEE floating-point system used by Intel processors, but other processors have used other floating-point formats. This can lead to confusion and errors in storage if data is converted between the different formats.
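Even between the IEEE 32-bit and 64-bit formats a conversion silently changes values, because the two formats hold different amounts of precision; a Python sketch using the standard struct module:

    import struct

    x = 0.1                             # Python stores this as a 64-bit float

    # Round-trip through the 32-bit format: precision is lost
    x32 = struct.unpack('<f', struct.pack('<f', x))[0]

    print(x == x32)                     # False
    print(x, x32)                       # 0.1 0.10000000149011612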