Unlike a desktop computer based on transistors operating on binary data, a quantum computer works on qubits, whose quantum state can take an infinite number of values. Small quantum computers have been built since the 1990s. Until 2008, the main difficulty concerned the physical realization of the basic element, the qubit: the phenomenon of decoherence (the loss of quantum effects at the macroscopic scale) hindered the development of quantum computers. The first quantum processor was created in 2009 at Yale University: it had two qubits, each composed of one billion aluminum atoms placed on a superconducting substrate. The field is financially supported by several organizations, companies, and governments because of the stakes involved: at least one algorithm designed to run on a quantum circuit, Shor's algorithm, would make possible many combinatorial calculations that are out of reach of a classical computer in the current state of knowledge. The possibility of breaking classical cryptographic methods is often put forward.
Quantum computers take advantage of the strange ability of subatomic particles to exist in more than one state at a time. Thanks to the way these particles behave, operations can be performed faster than on conventional computers, while consuming less energy.
In fact, unlike the bits of classical computers, which can only exist in one of two states (1 or 0), the quantum bits (qubits) of quantum computers can exist in any superposition of these two values and can thus store more information.
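To make the superposition idea concrete: a single qubit can be described classically as a unit vector of two complex amplitudes, one for each basis state. The sketch below, in plain NumPy, is an illustration of that principle, not code from IBM:

```python
import numpy as np

# A qubit state is a unit vector in C^2: one amplitude for |0>, one for |1>.
ket0 = np.array([1, 0], dtype=complex)

# The Hadamard gate turns |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = H @ ket0                 # (|0> + |1>) / sqrt(2)

# Born rule: squared amplitudes give the measurement probabilities.
probs = np.abs(state) ** 2
print(probs)  # [0.5 0.5] -- each outcome equally likely
```

Until the qubit is measured, both amplitudes coexist; measurement collapses the state to 0 or 1 with the probabilities shown.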
IBM has created quantum algorithms to perform machine learning on quantum computers, with the aim of building artificial intelligences much more powerful than those created with conventional computers. The announcement has been making the rounds in the news.
Feature mapping is the process of breaking down information to access finer-grained aspects of that data. Machine learning already allows this: for example, the pixels of an image can be placed in a grid according to their color. Algorithms then map the color values non-linearly and break down the data according to its most useful characteristics.
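The non-linear mapping mentioned above can be sketched classically. In the hypothetical example below (the function `feature_map` is an illustrative choice, not the algorithm from IBM's paper), 2-D points are lifted into a 3-D space, and the inner product in that space equals a polynomial kernel of the original points, so the comparison never has to build the map explicitly:

```python
import numpy as np

def feature_map(x):
    # Hypothetical non-linear map: lifts a 2-D point into 3-D.
    x1, x2 = x
    return np.array([x1 * x1, x2 * x2, np.sqrt(2) * x1 * x2])

a = np.array([1.0, 2.0])
b = np.array([3.0, 0.5])

# Inner product in the lifted space equals the polynomial kernel (a.b)^2.
lifted = feature_map(a) @ feature_map(b)
kernel = (a @ b) ** 2
print(lifted, kernel)  # both 16.0
```

The promise of quantum feature maps is analogous: compute inner products in spaces far too large to write out explicitly.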
IBM researchers have found a way to make machine learning significantly more efficient at feature mapping. In a published paper, the research team announced a “quantum algorithm” allowing quantum computers to perform machine learning at a new scale.
IBM’s new quantum algorithms thus make it possible to separate the aspects and characteristics of the data to an even greater degree than a standard machine learning algorithm can. Data can therefore be classified more precisely, and machine learning systems become more efficient. The goal is to use quantum computers to create new classifiers that generate more sophisticated data maps. In doing so, researchers will be able to develop more effective artificial intelligences that can, for example, identify patterns invisible to conventional computers.
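To see why a richer feature map yields better classifiers, here is a small classical sketch (the map `phi` and the nearest-centroid rule are illustrative assumptions, not IBM's method). Two classes that no straight line separates in the original plane become separable by a simple linear rule once mapped:

```python
import numpy as np

def phi(x):
    # Hypothetical quadratic feature map, for illustration only.
    return np.array([x[0] ** 2, x[1] ** 2, np.sqrt(2) * x[0] * x[1]])

# Points near the origin vs. points on a surrounding ring:
# not linearly separable in the original 2-D plane.
inner = np.array([[0.1, 0.1], [-0.1, 0.2], [0.2, -0.1]])
outer = np.array([[1.0, 0.0], [0.0, 1.1], [-1.0, 0.2]])

# In feature space, a linear rule (nearest class centroid) suffices.
c_in = np.mean([phi(p) for p in inner], axis=0)
c_out = np.mean([phi(p) for p in outer], axis=0)

def classify(x):
    fx = phi(x)
    near_in = np.linalg.norm(fx - c_in) < np.linalg.norm(fx - c_out)
    return "inner" if near_in else "outer"

print(classify([0.05, -0.05]))  # inner
print(classify([0.9, 0.3]))     # outer
```

A quantum feature map plays the role of `phi`, but in a state space whose dimension grows exponentially with the number of qubits.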
For now, IBM says that these new algorithms, run on quantum computers, have not yet surpassed the performance of conventional machines. However, this is mainly due to the fact that quantum computers are still limited by current hardware constraints.
Indeed, current quantum computers have a computing capacity limited to only two qubits, which can be simulated on conventional computers. It will therefore be necessary to wait for more powerful quantum computers to emerge before IBM's algorithms achieve “quantum advantage”. In the meantime, these new algorithms are available as open source to developers, researchers, and other experts.
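Such a small register is indeed easy to simulate classically: a two-qubit state is just a unit vector of four complex amplitudes, though the cost grows as 2^n with the qubit count. A minimal NumPy sketch (an illustration, not IBM's implementation) that prepares an entangled Bell state:

```python
import numpy as np

# A two-qubit register is a unit vector in C^4
# (amplitudes for |00>, |01>, |10>, |11>).
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

state = np.zeros(4, dtype=complex)
state[0] = 1                         # start in |00>
state = np.kron(H, I) @ state        # Hadamard on the first qubit
state = CNOT @ state                 # entangle: Bell state

print(np.abs(state) ** 2)  # [0.5 0. 0. 0.5] -- only |00> and |11> occur
```

Each extra qubit doubles the vector size, which is why classical simulation stops scaling and why “quantum advantage” requires larger, more reliable hardware.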