"64-bit" describes a processor microarchitecture in which the CPU and ALU registers, address buses, or data buses are 64 bits (8 octets) wide. We talked about 32-bit (x86) and 64-bit (x64) virtualization in a previous article. From the perspective of software, 64-bit computing means running code that uses 64-bit virtual memory addresses.
64-bit processors have existed in supercomputers since the 1960s and in RISC-based servers and workstations since the mid-1990s. In 2003, x86-64 and PowerPC G5 processors began to be introduced massively into personal computer architectures. Although a CPU may be internally 64-bit, its external data bus or address bus may have a different width, larger or smaller, and the term is also commonly used to describe the size of those buses. For example, many machines with 32-bit processors use 64-bit buses (e.g. Pentium CPUs) and are occasionally described as 64-bit processors for this reason. The term can also refer to the size of the instructions in the instruction set, or to any other data element. Without further qualification, however, a "64-bit" computer architecture is one whose integer registers are 64 bits wide and which can process 64-bit data both internally and externally.
64-bit Processors: Implications of the Architecture
Processor registers are generally divided into three groups: integer, floating point, and other. In all general-purpose processors, only the integer registers can store pointers (the address of some data in memory). The non-integer registers cannot be used to store pointers for reading or writing memory, and therefore cannot be used to bypass any memory restrictions imposed by the size of the integer registers.
Almost all general-purpose processors (with the notable exception of many 32-bit ARM and MIPS implementations) have integrated floating-point hardware, which may or may not use 64-bit registers to move data for processing. For example, the x86 architecture includes the x87 floating-point instructions, which use eight 80-bit registers arranged as a stack; later revisions of x86, and the x86-64 architecture, also include the SSE instructions, which use eight 128-bit registers (16 registers in x86-64).
In contrast, the DEC Alpha family of 64-bit processors defines 32 64-bit floating-point registers in addition to its 32 64-bit integer registers. It should be noted that speed is not the only factor to consider when comparing 32-bit and 64-bit processors. Workloads such as multitasking, load testing, and clustering (for HPC) may be better suited to a 64-bit architecture, given appropriately developed software; 64-bit clusters have been widely deployed for this reason in large organizations such as IBM, Vodafone, HP, and Microsoft. While 64-bit architectures indisputably make it easier to work with applications such as digital video editing, scientific computing, and large databases, there has been considerable debate about whether they, or their 32-bit compatibility modes, are faster than comparably priced 32-bit systems for other tasks. Most 32-bit operating systems and applications can run smoothly on 64-bit hardware.
Memory Limitations of 64-bit Processors
A 64-bit processor can theoretically address up to 16 exabytes of memory (1 EB = 10^18 bytes), whereas a 32-bit processor can address only 4 GB (1 GB = 10^9 bytes). Many CPUs are designed so that the contents of a single register can store the address of any data in virtual memory; the total number of virtual memory addresses, and therefore the total amount of data the computer can keep in its working area, is determined by the width of those registers. Beginning in the 1960s with the IBM S/360, then the VAX in the 1970s, and the Intel 80386 in the mid-1980s, a de facto consensus established that 32 bits was a convenient register size. At the time these architectures were designed, 4 gigabytes of memory was so far beyond typical installations that it was considered ample headroom for addressing. 4 gigabytes was considered an appropriate size to work with for another important reason: 4 billion integers are enough to assign unique references to most physical things in accounting applications such as databases.
However, with the passage of time and continued reductions in the cost of memory, machines with RAM approaching 4 gigabytes began to appear in the early 1990s, and it became desirable to use virtual memory spaces exceeding the 4-gigabyte limit to handle certain types of problems. Many 64-bit processors on the PC market currently have an artificial limit on the amount of memory they can address, because physical constraints make it very unlikely they will need support for the full 16-exabyte capacity. The Apple Mac Pro, for example, was originally manufactured so that it could be configured with up to 32 gigabytes of physical memory, and therefore had no need for support beyond that amount. According to Apple, the new version of its operating system theoretically supports 16 terabytes of addressable memory.