Processor developers have learned over time that processing speed cannot be increased by raising the clock rate alone. Delays occur repeatedly during the execution of program code, for example when accessing memory or peripherals.
In such cases the processor is busy waiting: it can only continue executing program code once the data from memory or the peripheral has been loaded into its registers. This latency means that a large part of the available computing power goes unused.
Processor designers therefore asked themselves early on how a processor would have to be structured so that it could continue executing program code while data is being loaded from outside. The result is modern processors that distribute work across many units operating in parallel.
Levels of concurrency / parallelization
- Pipelining
- Multi-threading / SMT
- Coprocessor
- Multi-core processor
- Multi-processor systems
- Grid / Cloud Computing
Instruction execution works like an assembly line (pipeline). A pipeline is a series of processing stages that together execute an instruction. While one instruction moves from stage 1 to stage 2, the next instruction enters stage 1. In the optimal case, processing at each stage takes one clock cycle.
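The assembly-line effect can be sketched with a little arithmetic: an ideal pipeline needs a few cycles to fill, and then completes one instruction per cycle. The stage count and instruction count below are illustrative, not taken from any particular CPU.

```python
# Minimal sketch of ideal pipeline timing (illustrative numbers only).

def pipeline_cycles(num_instructions: int, num_stages: int) -> int:
    """Cycles for an ideal pipeline: fill time, then one instruction per cycle."""
    if num_instructions == 0:
        return 0
    return num_stages + (num_instructions - 1)

def sequential_cycles(num_instructions: int, num_stages: int) -> int:
    """Without pipelining, each instruction passes through all stages alone."""
    return num_instructions * num_stages

if __name__ == "__main__":
    # 100 instructions on a 5-stage pipeline:
    print(pipeline_cycles(100, 5))    # 104 cycles
    print(sequential_cycles(100, 5))  # 500 cycles without pipelining
```

In the ideal case the pipeline approaches a throughput of one instruction per clock cycle; real pipelines fall short of this because of stalls and branch mispredictions.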
Multi-threading is the ability of a processor to run multiple computations concurrently. Ideally, two or more program flows (threads) of similar length can be combined, which significantly reduces the waiting time for the user.
Unfortunately, not all tasks parallelize equally well. Often one thread must wait for another, and algorithms that parallelize poorly waste time on elaborate coordination. On top of that there are the many bottlenecks in the system, for example interfaces and mass-storage access.
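The source does not name it, but one standard way to quantify this limit is Amdahl's law: if a fraction of a task must remain serial, the overall speedup is capped no matter how many workers are added. A minimal sketch:

```python
# Amdahl's law: speedup limit when only part of a task parallelizes.

def amdahl_speedup(parallel_fraction: float, workers: int) -> float:
    """Overall speedup given the parallelizable fraction and worker count."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / workers)

# A task that is 50% serial speeds up less than 2x even with 16 workers:
print(round(amdahl_speedup(0.5, 16), 2))   # 1.88
# A 95%-parallel task does much better:
print(round(amdahl_speedup(0.95, 16), 2))  # 9.14
```

This is why the serial portions, such as synchronization and mass-storage access, dominate once the core count grows.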
It is not always apparent to the programmer which functions are suitable for moving into a separate thread; parallelizing code demands a completely different mindset. In real-time strategy games, for instance, the game logic can run independently of the graphical output and input processing. In image processing it can pay off to move complex calculations into a separate thread. However, this cannot always be implemented: much of the computation runs on the graphics card itself, and the drivers for OpenGL and DirectX can only process one thread at a time.
SMT – Simultaneous Multi-Threading (Intel: Hyper-Threading) versus Parallelization
Simultaneous multi-threading means that multiple threads are executed simultaneously on one core. A thread is a strand of code execution. If one thread is waiting for data from memory, the processor switches to another thread, which continues to use the free resources.
SMT is a natural choice because a single thread alone cannot keep all the functional units of a processor busy. With two concurrent threads, utilization improves significantly. In addition, the execution paths of different threads are independent of each other, so they rarely get in each other's way.
Because SMT works relatively well, designers can forgo out-of-order execution. This is particularly noticeable in energy consumption: in-order processors with SMT simply consume less power.
Hyper-Threading is an Intel development and a precursor to the multi-core processor. Hyper-Threading fools the operating system into seeing a second processor core, thereby making better use of the functional units and bridging memory latencies.
If the first thread has to wait for data from memory, the processor can use the second thread to continue executing program code. With a sufficiently large cache and good prefetching, the chances are good that the waiting time can be put to use.
Prefetching is a method of loading instructions and data in advance, before they are actually needed.
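The switching idea can be sketched as two instruction streams interleaved by a toy round-robin scheduler: whenever one stream pauses (for example, waiting for memory), the other keeps the "functional units" busy. Stream contents and names here are purely illustrative.

```python
# Toy sketch of latency hiding: two instruction streams, interleaved
# round-robin so one can run while the other waits for memory.

def thread_a():
    yield "A: compute"
    yield "A: wait for memory"   # in SMT, the core runs thread B here
    yield "A: compute"

def thread_b():
    yield "B: compute"
    yield "B: compute"

def round_robin(*streams):
    """Interleave the streams one step at a time until all are exhausted."""
    active = [iter(s) for s in streams]
    order = []
    while active:
        for s in active[:]:
            try:
                order.append(next(s))
            except StopIteration:
                active.remove(s)
    return order

print(round_robin(thread_a(), thread_b()))
```

In real SMT hardware the interleaving happens per clock cycle and per functional unit, not step-by-step as here, but the principle is the same: the waiting thread's idle resources are handed to the other thread.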
The additional hardware required for Hyper-Threading amounts to around 20 percent and brings a 40 to 50 percent speed gain in multi-threaded mode.
A coprocessor is a special processor that extends the main processor by taking over certain functions, thereby relieving it. The coprocessor thus accelerates the system as a whole. Over time, many conventional coprocessor functions have been integrated into the main processor.
With multi-core technology, several cores are physically combined in one processor (CPU). This means that modern processors have not just one computing unit but several. These processors are called multi-core processors. Externally, multi-core CPUs look no different from single-core CPUs; within the operating system, however, a multi-core processor is treated as multiple units. Depending on the number of cores there are corresponding designations (dual-core, quad-core, and so on) indicating how many cores are integrated into the processor.
Since the early 1990s, supercomputers have contained multiple processors, each with dedicated memory. These systems are referred to as multi-processor systems. In personal computers, multiple separate processors never caught on; that area is dominated by multi-core processors.
Grid / Cloud Computing
The networking of computer systems for parallel computing and data processing is referred to as a grid. Often the grid is also called a cloud.
But should it really be called cloud computing? A dog usually has a tail, but not everything with a tail is a dog (pun intended). The term cloud computing is basically not technically defined; it is based on the consumption and distribution model instead.