If the motherboard represents the skeleton of a computer, the CPU is its brain. Here’s how the processor works and how it has evolved over the past 70 years.
In computer science, the von Neumann architecture is the theoretical description of the design and operation of programmable electronic computers. Theorized by the Hungarian-born mathematician John von Neumann in the 1940s, this hardware architecture underlies virtually all of today’s computer systems thanks to its simplicity of implementation and operation.
A von Neumann machine (that is, a device built according to the principles of the von Neumann architecture) is composed of three basic components: a Central Processing Unit (CPU), a working memory (RAM) in which to store both the data to be processed and the results, and input and output peripherals. These three components are connected by the system bus, the link that carries data from one part of the system to another. The heart of this architecture is the processor, or CPU, which carries out the logical and mathematical operations that transform the data arriving from input devices and returns the results, modified, through the output devices.
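The three components and the bus that links them can be pictured with a minimal sketch. The class names and methods below are invented purely for illustration, not a real hardware API:

```python
# Minimal model of the von Neumann components: CPU and RAM linked by a bus.
# All names here are illustrative only.

class RAM:
    """Working memory: stores data to be processed and results."""
    def __init__(self, size):
        self.cells = [0] * size

    def read(self, address):
        return self.cells[address]

    def write(self, address, value):
        self.cells[address] = value

class CPU:
    """Carries out logical and mathematical operations on data from memory."""
    def __init__(self, bus):
        self.bus = bus  # the "bus" is simply the memory interface the CPU talks through

    def add(self, addr_a, addr_b, addr_out):
        result = self.bus.read(addr_a) + self.bus.read(addr_b)  # transform input data
        self.bus.write(addr_out, result)                        # return result via memory

ram = RAM(16)
cpu = CPU(ram)
ram.write(0, 2)    # an input peripheral would deposit data in memory like this
ram.write(1, 3)
cpu.add(0, 1, 2)
print(ram.read(2))  # an output peripheral would read the result: 5
```

The point of the sketch is the separation of roles: memory holds data, the CPU transforms it, and everything moves across a shared connection.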
The history of the CPU
Since von Neumann theorized the hardware architecture that bears his name, much has changed, including the structure and manufacture of processors. In the days of EDVAC, one of the first electronic digital computers, processors were designed and manufactured specifically for each device. In most cases these CPUs were intended to perform only one type of operation or function, and therefore had to be designed ad hoc.
With the mass production of transistors and the subsequent evolution of the integrated circuit (IC), it became possible to build microchips whose computing power grew as their size shrank: the smaller the components, the more logic operations a chip could perform.
The era of transistors began in the mid-1950s, when the first printed circuits came to replace the more fragile and less stable thermionic valves (vacuum tubes). This made it possible to build processors that were faster, more reliable and, above all, more powerful. By eliminating the need to rely on fragile and bulky components such as valves, it also became possible to greatly reduce the physical space that central units occupied within the architecture of the computer.
The basis of these new CPUs was the printed circuit board: plastic cards manufactured in a standardized way and in series. In this period, technological advances made it possible to build integrated circuits that were ever smaller and ever more densely packed: more and more transistors were housed on the same circuit, giving rise to the chip. This new production technology significantly increased the computing speed of computers while also ensuring considerable energy savings.
With the arrival of the new decade came a new production technology that enabled a further step forward in computing. Thanks to the intuition of the Italian physicist Federico Faggin, starting in the 1970s processors based on a single, rather small chip were developed. Thus began the era of the microchip: small plastic cards that housed, in a few square centimeters, millions of transistors, with computing power unprecedented until then. The reduced dimensions also allowed the operating frequencies of processors (by now microprocessors) to increase further, since the physical distance between one transistor and the next had decreased.
Today’s technological developments have greatly reduced the space between one transistor and the next. Intel and AMD, the two main global manufacturers of microchips and processors, today adopt manufacturing processes in which the transistor gates (or “activation terminals”) measure 14 nanometers (a nanometer is one billionth of a meter). But miniaturization techniques now seem to be reaching their limits, and many analysts believe that only the use of materials other than silicon could enable further evolution, allowing Moore’s Law to continue to hold.
What is a processor
The primary task of a CPU is to execute series of commands, usually called instructions, that make up a program. To do this, every processor (and this has remained unchanged from the 1940s until today) relies on two key components: the ALU (arithmetic logic unit) and the CU (control unit). The former performs the logical-mathematical operations that transform the data arriving from input devices, while the latter controls and coordinates the actions needed to execute those operations.
The working cycle of a processor is usually divided into four phases: instruction retrieval (fetch), decoding (decode), execution (execute) and writing back of results (write back).
In the first phase, the processor retrieves the information needed to perform the logical operation. This information is stored in memory and is located via a logical address held in a special register, the Program Counter.
Once the information has been obtained, it must be made “digestible” to the processor: the data are broken down into units meaningful to the CPU, so that it can carry out the instructions required by the program.
The third phase is the purely operational one. After the data have been broken down and made “digestible” by the processor’s arithmetic logic unit, they are transformed according to the operations that each program sends to the processor.
In the fourth and final phase, write back, the modified data are written back to a portion of memory and made available to the program that made the request.
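The four phases just described can be sketched as a toy simulator. The three-instruction “instruction set” (LOAD/ADD/STORE) and its encoding below are invented for this illustration, assuming a simple accumulator machine:

```python
# Toy CPU illustrating the four-phase cycle: fetch, decode, execute, write back.
# The ISA and memory layout are invented for this sketch.

memory = {0: ("LOAD", 100),   # program: acc = mem[100]
          1: ("ADD", 101),    #          acc = acc + mem[101]
          2: ("STORE", 102),  #          mem[102] = acc
          100: 7, 101: 5, 102: 0}

pc = 0    # Program Counter: holds the address of the next instruction
acc = 0   # accumulator register

while pc in memory and isinstance(memory[pc], tuple):
    instruction = memory[pc]       # 1. fetch: retrieve the instruction at PC
    pc += 1
    opcode, operand = instruction  # 2. decode: break it into meaningful units
    if opcode == "LOAD":           # 3. execute: transform the data
        acc = memory[operand]
    elif opcode == "ADD":
        acc = acc + memory[operand]
    elif opcode == "STORE":        # 4. write back: result returned to memory
        memory[operand] = acc

print(memory[102])  # 12
```

Each pass through the loop is one complete cycle; the Program Counter advancing after the fetch is what lets the machine step through the program.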
The working cycle just described also determines the speed of the processor and essentially constitutes its unit of measurement of performance: the greater the number of cycles the processor can complete per unit of time, the higher its working frequency (measured in hertz). Today processor frequencies are around 3.2 gigahertz (GHz), but to get an idea of overall computing power, it should be borne in mind that each physical processor contains two or more processing cores (multicore).
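The relation between clock frequency and cycle time can be checked with a quick calculation, using the 3.2 GHz figure quoted above:

```python
# At 3.2 GHz a processor completes 3.2 billion cycles per second,
# so a single cycle lasts 1 / 3.2e9 seconds: a fraction of a nanosecond.

frequency_hz = 3.2e9             # 3.2 GHz expressed in hertz
cycle_time_s = 1 / frequency_hz  # duration of one cycle, in seconds
print(cycle_time_s * 1e9)        # cycle time in nanoseconds: 0.3125
```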