The history of mankind is accelerating. Of the four billion years of our planet's existence, all known species are less than one hundred million years old, humans have existed for less than a million years, and the oldest painting on the wall of a cave is barely 30,000 years old. Just 500 years ago, the invention of the printing press accelerated the dissemination of knowledge. Only 200 years separate us from the steam engine that drove the industrial revolution. With its 50 years, the computer is just a youngster. And yet, what a contrast between the first computer (50 tons, 25 kW, a few thousand memory locations, hundreds of instructions per second) and the Pentium microprocessor (a few grams, 25 watts, 8 to 32 megabytes of memory, 100 million instructions per second). And everything suggests that in 10 years the Pentium will appear as outdated as the first computer does today.
When it was invented, the computer was a laboratory curiosity. In the early 1950s, a market study famously estimated the world market at fifty machines. Today, the 200 million computers installed show that contemporary industrial civilization would be inconceivable without it. Since 1995, more PCs have been sold than television sets. In the time it takes to read this text, the number of computers connected to the Internet, the network of networks, will have increased by several thousand. The way computer technology has revolutionized intellectual and economic activity has no equivalent in other fields. A purely static description of techniques and results is therefore totally inadequate for understanding the computer. A dynamic view based on the major trends of its evolution is essential to understand what will happen even in the very short term.
The Lightning-Fast Evolution of Hardware
---
For over 40 years, the evolution of component technology has played a crucial role in the development of computing.
Two nearly simultaneous events occurred during the 1940s: in 1945, John von Neumann's invention of the stored-program computer, and in 1948, the invention of the transistor by three researchers at Bell Laboratories.
In 1945, John von Neumann (1903-1957) invented the modern form of the stored program. The concept of the program had already been introduced around 1840 by Charles Babbage (1792-1871) as part of his "analytical engine"; he intended to build it, but the machine was never completed. John von Neumann introduced an important refinement: writing in the same form both the instructions that process the data and the data themselves. Instructions and data could thus be handled in exactly the same way by the machine, opening the way to the modern computer.
The transistor was invented in 1948 by John Bardeen, Walter Brattain and William Shockley, three researchers at Bell Laboratories. At that time, the only known way to amplify an electric current was the triode valve, invented in 1906 by Lee de Forest. The triode valve had made telephony and radio possible, and it was the main component of all electronic circuits. However, the triode had a major flaw: its heated filament, which consumed a great deal of energy and whose fragility limited its life to a few hundred hours. Systems could not contain much more than a hundred valves without their reliability becoming intolerable. By comparison, the transistor consumes one ten-millionth of the energy needed by the triode and has a practically unlimited lifespan.
The synergy between a new component and a new application caused explosive growth of both. Indeed, digital systems require a very large number of components: a simple calculator needs 100 times more transistors than a television set. The Pentium microprocessor, which will serve as an example throughout this document, has 7 million transistors, and the memory that holds the data contains several hundred million more. With so many components, the key problem to solve was the number and cost of connections.
The Cost of Connection
Since the early 1960s, the strategy of engineers has been very simple: pack as many components and connections as possible into a single integrated circuit in order to reduce costs. In 1995, the 7 million transistors of a Pentium represent about 18 million connections. Using traditional methods, a wiring operator would have needed 40 years to make those 18 million connections. Through miniaturization and integration into a single integrated circuit, the same result is obtained for a few hundred francs.
The size of an integrated circuit has changed little. The increase in the number of components is achieved mainly by reducing the size of the features etched on the circuit (currently about 0.5 micron). This reduction has two consequences, on performance and on cost:
Performance improves steadily. The maximum operating speed of a transistor depends on the transit time of electrons inside it: as integration increases, transistor size decreases and performance improves.
The marginal cost of producing a circuit is almost constant (a few tens of dollars). The raw material, silicon, is abundantly available everywhere. The unit price of a circuit is therefore determined by the amortization of design studies and of the manufacturing plant. At constant performance, the cost of a microprocessor or a memory is divided by 10 every 4 years.
For example, in the early 1980s the Cray 1 supercomputer, capable of processing 100 million instructions per second, sold for 60 million francs. It required a large computer room and air-conditioning equipment. In 1996, a microcomputer of this power, based on a Pentium 100 and with the same memory capacity, is the basic multimedia machine for the general public. Its price is about 6,000 times lower than that of the Cray 1, and it sits on a desk, without any special precautions.
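As a rough consistency check (a sketch only, using the figures quoted above and taking 1981 as an arbitrary stand-in for "the early 1980s"), the "divided by 10 every 4 years" rule does account for a price ratio of this order:

```python
# Rough check of the "price divided by 10 every 4 years" rule against the
# Cray 1 example above; 1981 is an assumed stand-in for "the early 1980s".
years = 1996 - 1981
periods = years / 4                 # number of 4-year periods
price_ratio = 10 ** periods

print(f"Expected price ratio after {years} years: about {price_ratio:,.0f}")
# roughly 5,600, the same order of magnitude as the factor of 6,000 quoted above
```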
The Exponential Growth
The number of components per circuit has increased very regularly, from a few components in the late 1950s to millions today. In 1965, Gordon Moore, then director of research at Fairchild before co-founding the Intel Corporation in 1968, was the first to predict that the number of components per circuit would continue to double every two years, as had been the case during the previous five years. In 30 years, there has been no significant deviation from this prediction.
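To give a sense of what this doubling rule implies, here is a small back-of-the-envelope projection in Python; the starting count of 64 components is an illustrative assumption, not a historical figure:

```python
# Illustration of a "doubling every two years" rule over 30 years.
# The initial component count is an assumed, illustrative figure.
initial_components = 64
years = 30
doubling_period = 2                              # years per doubling

growth_factor = 2 ** (years / doubling_period)   # 2**15 = 32768
projected = initial_components * growth_factor

print(f"Growth factor over {years} years: x{growth_factor:,.0f}")
print(f"Projected components per circuit: {projected:,.0f}")   # about 2 million
```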
Around One Billion Transistors per Circuit?
How much further can this integration go? In other words, what is the maximum number of components that can be integrated on a single circuit? To get an idea of this number, one simply divides the usable surface of an integrated circuit by the minimum area of a transistor.
The surface of an integrated circuit is a crystal of pure silicon. To keep a reasonable probability of having no imperfections, the maximum area is about 10 square centimeters. However, this surface cannot be filled entirely with transistors: there must be room for the connections. Today the average fill factor is about 10%, which leaves a useful area of about 1 square centimeter.
The minimum length of a transistor is about 400 silicon atoms; this figure is obtained by taking into account the proportion of impurities that must be incorporated into the silicon crystal. The transistor must also be isolated on both sides by an equivalent length. Since the distance between two silicon atoms is 5.4×10⁻⁸ cm, the minimum side of the smallest transistor is approximately 400 × 3 × 5.4×10⁻⁸ ≈ 10⁻⁴ cm, giving an area of about 10⁻⁸ square centimeter.
The maximum obtained by this simplified calculation is about 100 million transistors per circuit. In fact, some refinements are possible that would push this limit to around a billion transistors per circuit. This means that the integration process we have known for 35 years will continue for at least another 10 years: every 18 months, all other things being equal, the power of microprocessors will double. The real computer revolution is yet to come.
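The whole estimate can be written out explicitly. The sketch below simply restates the figures used above, with the same rounding as in the text (orders of magnitude only):

```python
# Back-of-the-envelope estimate of the maximum number of transistors per
# circuit, using the figures quoted above (orders of magnitude only).
chip_area_cm2 = 10.0           # defect-free silicon area, cm^2
fill_factor = 0.10             # fraction of the area usable for transistors
atom_spacing_cm = 5.4e-8       # distance between two silicon atoms, cm
atoms_per_transistor = 400     # minimum transistor length, in atoms
isolation_factor = 3           # transistor plus isolation on both sides

useful_area_cm2 = chip_area_cm2 * fill_factor                            # ~1 cm^2
min_side_cm = atoms_per_transistor * isolation_factor * atom_spacing_cm  # ~6.5e-5 cm, rounded to ~1e-4 in the text
min_area_cm2 = 1e-8                                                      # (1e-4 cm)^2, the rounded figure used above

max_transistors = useful_area_cm2 / min_area_cm2
print(f"Maximum transistors per circuit: about {max_transistors:.0e}")   # ~1e+08, i.e. about 100 million
```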
And Data Storage?
The powerful computers of tomorrow will be useful only insofar as they have at their disposal ever larger and more quickly accessible volumes of data. The evolution of mass storage is therefore as important as that of processors.
On magnetic media, the cost of storing 10,000 characters fell from 150,000 dollars in 1955 to 100 dollars in 1980 and to 1 cent in 1995. As with component technology, the technology for storing information in magnetic form has made fantastic progress. The Danish engineer Valdemar Poulsen presented the first magnetic recording at the Universal Exhibition in Paris in 1900. Since then, piano wire has been replaced by magnetic material coated on plastic strips or hard disks, but the principle of recording has remained the same as in Poulsen's apparatus: an electric current magnetizes a small area of magnetic material, which stores the data.
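To put these figures in perspective, the average annual rate of decline they imply is easy to compute; this is a simple calculation on the numbers above, nothing more:

```python
# Average annual factor by which the cost of storing 10,000 characters
# was divided, computed from the figures quoted above.
def annual_decline(cost_start, cost_end, years):
    """Factor by which the cost is divided each year, on average."""
    return (cost_start / cost_end) ** (1 / years)

print(f"1955-1980: divided by about {annual_decline(150_000, 100, 25):.2f} per year")  # ~1.34
print(f"1980-1995: divided by about {annual_decline(100, 0.01, 15):.2f} per year")     # ~1.85
```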
The density of magnetic recording has reached several million characters per square centimeter. As with transistors, reducing the size of the magnetized areas improves not only the density but also the speed at which data are read: at constant read speed, more data pass under the head in the same time. On the other hand, as the magnetized areas shrink, the strength of the magnetic field to be read decreases. The limits of magnetic recording will soon be reached.
Fortunately, another technology is taking over: optical recording. The energy needed for reading is no longer stored in the medium; a laser beam reads the data, pushing back the limits on the size of the stored information. This technology was first used for the audio compact disc, introduced in 1983. It was an immediate success, and vinyl LPs disappeared within a few years. The sound is digitized and recorded on the compact disc as a succession of bits. The capacity of this medium is 600 million characters, with an extremely low uncorrected error rate. The compact disc was quickly adopted in the computer world as the CD-ROM, by analogy with ROM (Read Only Memory), because it is not rewritable. Still images and especially digital video require much larger capacities. In late 1995, the world's leading manufacturers agreed on the standard for the CD-ROM's successor, which will store 6.4 billion characters, a capacity ten times greater for the same surface.
Smaller, Faster and Cheaper!
The considerable price difference observed for "the same" performance is not related only to the integration of components. It also reflects the amortization of development costs over very large volumes, made possible by the progressive replacement of so-called "proprietary" systems by worldwide standards known as "open systems". Competition between manufacturers who supplied complete, incompatible systems has gradually turned into competition at the component level, on price and performance, between interoperable systems.
After 2010?
The performance of computers will therefore continue to improve exponentially for about another 10 years. Beyond that, without a change of technology, the pace of progress will become comparable to that of other technologies: a few per cent per year. To improve the price/performance ratio significantly, something else will have to be found.
The first idea that comes to mind is to use light instead of electricity, the photon instead of the electron. A photonic computer might work at least 1,000 times faster than an electronic one. However, we still do not know how to amplify light without going through electrons. The "photon transistor" remains to be invented before we can hope one day to build photonic computers. A few laboratory devices exist and give hope for a solution, but since there is no serious path to industrialization, it is very likely that beyond the year 2010 we will still be using the current technology based on the electron.
The Biological Computer is a Dream of Science Fiction
Sooner or later, then, we will need power beyond the capabilities of a single machine. The solution lies in parallel architectures. Conventional computers execute instructions sequentially. Inside microprocessors, some parallel execution of instructions has already been introduced to improve performance: unknown to the programmer, the processor carries out as many computations simultaneously as the sequential nature of the program allows. In multi-microprocessor systems, by contrast, the system consists of a network of identical computers capable of running different programs in parallel. We know how to use parallel systems when the problem lends itself naturally to them, as with vector computation or operations on the pixels of an image. Apart from these few cases, the use of multiple microprocessors in existing applications is not obvious.
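As an illustration of the kind of "naturally parallel" problem just mentioned, here is a minimal Python sketch in which identical workers each process one band of an image independently; it is purely illustrative and assumes nothing beyond the standard library:

```python
# Minimal sketch of a "naturally parallel" pixel operation: each worker
# brightens one row of a fake grayscale image, independently of the others.
from multiprocessing import Pool

WIDTH, HEIGHT = 640, 480
image = [[(x + y) % 256 for x in range(WIDTH)] for y in range(HEIGHT)]

def brighten_row(row):
    """The same independent operation applied to every pixel of one row."""
    return [min(255, pixel + 40) for pixel in row]

if __name__ == "__main__":
    with Pool(processes=4) as pool:      # four identical workers in parallel
        result = pool.map(brighten_row, image)
    print(result[0][:8])                 # first few pixels of the first row
```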
Yet we have an excellent model at our disposal: the human brain. The speed of elementary information processing in the brain is very slow. Information is transmitted by electrical impulses within a neuron, and by chemical processes between neurons. The "processing speed" of each neuron is only a few hundred impulses per second. Yet the brain succeeds in less than a second at extremely complex operations such as face recognition, while today's computers manage shape recognition only with difficulty, at the cost of very long processing times. We are fairly certain that the brain consists of about 40 billion neurons linked in a network structure that makes maximum use of parallel processing. This analogy gives an idea of the enormous reserve of power that will become available once we can make better use of parallel architectures.
The Development of Telecommunications
Not so long ago, the telephone was the only means of telecommunication between people. Since the early 1970s, computers have also been exchanging data with one another or with terminals. The 7 million Minitel terminals in France and the tens of millions of computers worldwide that use the hundreds of thousands of servers connected to the Internet have caused an explosion in telecommunications needs. Data exchange between computers requires much greater capacity and much higher transmission speeds than telephony. The capacity of telecommunications networks must therefore be greatly increased.
After the fundamental invention of the transistor in 1948, another took place in 1958: the laser. With lasers and optical fiber, electrons are replaced by photons to carry the bits of information. Progress has been swift, and optical fiber has replaced copper as the technology for high-speed telecommunications. The last transatlantic copper cable, laid in 1975, allowed 10,000 simultaneous calls; the first transatlantic fiber-optic cable, laid in 1988, has a capacity of 40,000 simultaneous calls. The development of fiber-optic technology is very fast. From 4500 BC to the 1960s, the transparency of glass had improved only modestly; today, a window 10 km thick made of the best optical fiber would be more transparent than a traditional glass window 1 cm thick.
What reserve of growth does optical technology still have? Current technology can still improve by a factor of more than 1,000, compared with a factor of about 200 for microprocessors. It will probably take 25 years to get there, which leaves considerable room for improving capacity and transmission rates through 2020. In particular, it should become possible to cross the Atlantic by optical fiber without a single repeater.
As with electronic circuits, most of the cost of systems lies not in the components but in the interconnections. Engineers increased the number of components per circuit not as an end in itself but to reduce the cost of connections. Similarly, the performance of lasers and optical fibers is being improved to reduce the cost of the interconnections, which cannot themselves be miniaturized.
The Deployment of Software
Advances in miniaturization set the pace at which hardware technology changes, but it is the speed at which software is deployed that determines the penetration of computers into every industrial or intellectual activity. Software turns the computer from a tool that can theoretically solve a problem into a tool that solves it in practice. Hardware is to software what instruments are to music. Leonardo da Vinci defined music as "shaping the invisible"; the definition suits software even better.
While progress has been dazzling in hardware, it has been just as impressive in software. Within 15 years of the definition of the von Neumann machine architecture in 1945, nearly everything had been invented, and programmers were already waiting for more powerful machines in order to make progress. To be convinced, one need only recall the birth dates of the main languages: FORTRAN in 1957, LISP in 1959, COBOL in 1960, BASIC in 1964. The same is true of operating systems: the functions offered in 1966 by the OS developed for the IBM 360 were virtually the same as those available today in the latest version of OS/390. The first version of UNIX was developed from 1969 onwards, and relational databases were born in 1970.
The first computer programs were written by mathematicians and scientists who believed the work would be simple and logical. Software proved harder to develop than they had assumed. Computers were stubborn: they persisted in doing what was written rather than what was intended. As a result, a new breed of artisans took over the job. They were often neither mathematicians nor scientists, but they were deeply engaged in an adventure with computers, the precursors of a new science.
The Evolution of Programming Languages
The idea of programming languages is as old as the digital computer. One need only try to program a computer directly in binary code to understand immediately why. Early programmers therefore quickly invented symbolic notations, called "languages", translated into binary code by programs called "compilers". Among the first languages, the one whose influence has been greatest is undoubtedly FORTRAN, developed between 1954 and 1957 by John Backus and his colleagues at IBM. It was not certain, at a time when the power of machines was very limited, that a compiler could produce efficient code. This objective was achieved, and FORTRAN is still used today. But the original FORTRAN included unnecessary constraints, limited data structures and, above all, serious deficiencies in its control logic. In a sense, all subsequent research on the definition of new programming languages can be seen as attempts to correct the defects of FORTRAN.
Whether the language was defined by a committee, like COBOL, by a company, like PL/I, by an individual, like PASCAL, or by the Department of Defense, like ADA, all attempts to define the universal language have failed, leaving the field open to thousands of languages, of which only a dozen are widespread.
Unlike hardware, advances in software do not come from a single technology or even a dominant technology. In the field of languages, for example, progress comes from better program control structures, better programming environments and more powerful programming tools. The evolution is slow, but there is progress. After a few years, these advances are no longer seen as improvements; they are simply taken for granted. What is surprising, however, is that the old techniques do not disappear. Some people still program in languages that date back 30 years, such as FORTRAN, or even in the ancient notation known as assembler, while others regard these tools as living fossils.
The User Interface
A program turns the computer into a tool for a particular purpose, such as helping to design an airplane or to write a document. The user interface is the part of the software that mediates between the user and the program. The user interface used to be the last part to be designed; it has now become the first. In fact, for the beginner as for the experienced user, the interface he perceives is "his" computer. This illusion is a simplified metaphor that each user builds for himself to explain the system's actions or to initiate new ones. One must mention the fundamental work done from 1973 onwards by the teams at the Xerox Palo Alto Research Center (Xerox PARC) on the mouse and on windows, which gave birth to the first commercial products: the STAR in 1981 and the LISA in 1983. Costly and inefficient, these products were commercial failures. In 1985, the Apple Macintosh was the first product of this kind to succeed commercially in the proprietary world, followed in 1990 by Microsoft Windows 3.0 in the open world of the PC.
Most of the devices and principles developed to refine this metaphor have now become commonplace in software design. The most important principle is WYSIWYG ("What You See Is What You Get"): the image on the screen is at all times a faithful representation of the user's metaphor. Each manipulation of the image produces a predictable change in the state of the system, or at least in the state the user imagines. The elements of the metaphor that has now become standard are windows, menus, icons and a pointing device. Windows make it possible to represent several activities on the screen simultaneously. Menus allow the next action to be chosen. Icons represent data as concrete objects. The pointing device, usually a mouse, is used to select windows, menus and icons.
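As a minimal sketch of these elements in code (using Python's standard tkinter toolkit purely as an illustration; the window title, menu entries and "[Doc]" icon label are made up and have nothing to do with the products mentioned above), a window, a menu, an icon-like button and mouse-driven selection might look like this:

```python
# Minimal sketch of the WIMP elements: a Window, a Menu, an Icon-like
# button and a Pointing device (mouse clicks). Illustrative only.
import tkinter as tk

root = tk.Tk()                       # the window: one activity on the screen
root.title("Document - Editor")

menubar = tk.Menu(root)              # the menu: choose the next action
file_menu = tk.Menu(menubar, tearoff=0)
file_menu.add_command(label="Open...", command=lambda: print("Open chosen"))
file_menu.add_command(label="Quit", command=root.destroy)
menubar.add_cascade(label="File", menu=file_menu)
root.config(menu=menubar)

icon = tk.Button(root, text="[Doc]",  # the icon: a data object made concrete
                 command=lambda: print("Document selected"))
icon.pack(padx=40, pady=40)

# the pointing device: clicks on the window background are reported
root.bind("<Button-1>", lambda event: print(f"Click at {event.x}, {event.y}"))

root.mainloop()
```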
All this has spawned a new generation of interactive software based on metaphors that use this virtual reality to simplify the user's task and to exploit his ability to simulate the operation of the program without having to resort to abstractions or hidden mechanisms. The software designer must master this theatrical setting, which is used to create the illusion of reality and so improve ease of use.