What is a Processor (CPU)?

Computers and other smart electronic devices have become a cornerstone of daily life. At the heart of every computer is the Central Processing Unit (CPU), also known simply as the processor or microprocessor.

The CPU executes the instructions and processes the data in computer programs. In this article, I will explain what a processor is and how it works, starting with the large, custom-built computers of the past and working up to today’s electronic devices.

Processor (CPU) Definition and Features

What is a Processor, and What Does It Do?

CPU is the abbreviation of Central Processing Unit. This hardware is usually known simply as a processor or microprocessor.

The definition of a processor is simple: it interprets a computer’s instructions and processes its data. In other words, it understands and executes the instructions in a program.

The CPU is what makes a digital computer programmable, and it is one of a computer’s essential components, alongside primary storage and I/O devices. A microprocessor is simply a CPU built as an integrated circuit.

Since the mid-1970s, single-chip microprocessors have become widespread and have replaced almost all other types of CPU. Today, the term “CPU” usually refers to a microprocessor.

History

The central processing unit runs computer programs, and devices that did this existed before the term “CPU” became common. The term and its abbreviation have been in use since at least the early 1960s. The form, design, and implementation of CPUs have changed dramatically over time, but their basic operating principles remain much the same.

The first CPUs were custom-designed as part of a specific, often one-of-a-kind, large computer. This expensive approach eventually gave way to inexpensive, standardized processors. The change began in the era of discrete transistors, mainframes, and minicomputers, and the widespread use of integrated circuits (ICs) accelerated it.

The miniaturization and standardization of CPUs have expanded the range of devices that contain them. Modern microprocessors are found in automobiles, televisions, refrigerators, and cell phones. Almost all CPUs deal with discrete (binary) states, so they need switching elements to distinguish between and change those states.

Before transistors, electrical relays and vacuum tubes were used as switching elements. Relays were slow and needed extra hardware to cope with problems such as contact bounce. Vacuum tubes switched faster, but they had to warm up before use and failed frequently; when a tube burned out, it had to be found and replaced. As a result, early electronic computers were generally less reliable than later designs.

Tube-based processors nevertheless became dominant because of their considerable speed advantage. They ran at relatively low clock frequencies, roughly 100 kHz to 4 MHz, limited by the switching speed of the devices they were built from.

Transistor Processor

Processor designs grew more complex as smaller, more reliable electronic devices became available. The first such improvement came with the transistor. Transistorized processors of the 1950s and 1960s no longer depended on bulky, fragile components; transistors made it possible to create more complex and reliable CPUs.

These CPUs were built on printed circuit boards. Integrated circuit (IC) technology then allowed many transistors to be fabricated on a single chip. At first, only basic digital circuits such as logic gates were miniaturized onto ICs; chips of this kind are called “small-scale integration” (SSI) devices.

The Apollo Guidance Computer used SSI chips; a full processor required thousands of them, yet the approach still saved space and power compared to discrete designs. As microelectronics advanced, more transistors fit on each IC, reducing the number of individual chips needed. MSI and LSI (medium- and large-scale integration) chips raised the transistor count per chip to hundreds and then thousands.

In 1964, IBM introduced the System/360 architecture, a family of computers with different speeds and performance levels that could all run the same programs. IBM used the concept of microcode to make this compatibility easier to achieve. System/360 led the market for many years, and its descendants live on in IBM’s modern mainframes.

In 1964, DEC introduced the PDP-8 computer for the scientific and research markets. DEC’s later PDP-11 line was eventually implemented with LSI components; its first LSI implementation used a CPU composed of only four LSI chips.

Transistor-based computers ran faster, were more reliable, and consumed less power. The short switching time of a transistor, compared with a relay or tube, let CPUs reach clock speeds of tens of megahertz. In this era, SIMD vector designs also began to appear, leading to early specialized supercomputers such as those made by Cray Inc.

Microprocessors

The first commercially available microprocessor was the Intel 4004, released in 1971. The first widely used microprocessor, the Intel 8080, followed in 1974. These chips changed how central processing units were implemented.

Mainframe and minicomputer manufacturers of the time launched proprietary IC development programs to upgrade their older architectures, producing microprocessors that remained compatible with their existing hardware and software.

Combined with the rise and success of the personal computer, the term CPU became attached to the microprocessor. Whereas earlier generations of CPUs were implemented as discrete components spread across many circuit boards, microprocessors are built from a small number of ICs, usually just one.

The smaller overall size allows faster switching times, thanks to physical factors such as reduced parasitic capacitance. Synchronous microprocessors have clock rates ranging from tens of megahertz to several gigahertz.

As the ability to build ever-smaller transistors on a single IC grew, so did the complexity and transistor count of CPUs. Moore’s Law describes this trend: the number of transistors on a chip roughly doubles every two years.

Although the complexity, size, and construction of CPUs have changed enormously over the last 60 years, their basic design and operation have not: almost all of today’s processors can still be described as von Neumann stored-program machines.

While Moore’s Law held for decades, the limits of transistor technology have raised concerns: extreme miniaturization of electronic gates amplifies effects such as electromigration and subthreshold leakage.

These concerns have pushed researchers to investigate new approaches, such as quantum computing and wider use of parallelism, which extend the usefulness of the classical von Neumann model.

How Does the Processor (CPU) Work?

Most processors execute sequences of stored instructions called programs. A program is represented as a series of numbers held in the computer’s memory. Processors based on the von Neumann architecture handle each instruction in four steps: fetch, decode, execute, and writeback.

The first step, fetch, retrieves an instruction from memory. The program counter holds the address of the current instruction, telling the processor where it is in the program.

After an instruction is fetched, the program counter is incremented so that it points to the next one. Because instructions must often be fetched from relatively slow memory, the processor may have to wait; modern processors largely hide this delay with caches and pipelining.

In the decode stage, the instruction is broken into parts that the processor can act on. The processor’s ISA (Instruction Set Architecture) defines how the numeric instruction is interpreted.

One group of bits, the opcode, indicates which operation to perform. The remaining bits supply the operands, which may be a constant (immediate) value, a register, or a memory address.
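
To make the opcode/operand split concrete, here is a small sketch that decodes a hypothetical 16-bit instruction word. The field layout is invented for illustration and does not belong to any real ISA:

```python
# Hypothetical 16-bit instruction layout (invented, not a real ISA):
# bits 12-15: opcode | bits 6-11: destination register | bits 0-5: source register
def decode(instruction: int):
    opcode = (instruction >> 12) & 0xF    # which operation to perform
    dest = (instruction >> 6) & 0x3F      # operand: destination register number
    src = instruction & 0x3F              # operand: source register number
    return opcode, dest, src

print(decode(0x1A42))   # (1, 41, 2)
```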

In some designs, the instruction decoder is a fixed, hardwired circuit. In others, a microprogram translates instructions into the internal configuration signals that steer the processor.

The microprogram is sometimes rewritable, so the way instructions are decoded can be changed even after manufacture. After the fetch and decode steps comes the execute step, in which the relevant processor units are connected and activated.

For example, for an addition operation, the arithmetic logic unit (ALU) is used: its inputs are connected to the numbers to be added, and its output holds the sum. The ALU performs simple arithmetic and logic operations like this.

If the addition overflows, an overflow flag is set. In the final step, writeback, the result is written to a processor register or to main memory.
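
As a minimal sketch of how such a flag mirrors an arithmetic result, here is a hypothetical 8-bit unsigned ALU add (the function and width are invented for illustration):

```python
# A hypothetical 8-bit unsigned ALU add: results wrap at 256, and a flag
# records the wrap, much as a hardware flag register records an overflow.
def alu_add8(x: int, y: int):
    total = x + y
    overflow_flag = total > 0xFF     # result did not fit in 8 bits
    return total & 0xFF, overflow_flag

print(alu_add8(200, 100))   # (44, True): 300 wrapped to 44, flag set
```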

Some instructions change the program counter instead of producing a data result. These instructions are called jumps, and they make loops, conditional execution, and function calls possible.

Other instructions change or inspect flags, which then influence program behavior. For example, a compare instruction evaluates two values and sets a flag indicating which is greater; a later conditional jump can use that flag to decide where the program goes next.

After the instruction has executed and its result has been written back, the whole cycle repeats, with the next instruction fetched from the address in the incremented program counter. If the instruction was a jump, the program counter instead holds the jump’s target address.
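
The following toy sketch ties the four steps together by simulating an invented accumulator machine. The opcodes, encoding, and instruction set are made up for illustration and do not correspond to any real CPU:

```python
# A toy fetch-decode-execute-writeback loop for an invented accumulator machine.
LOAD, ADD, JNZ, HALT = 0, 1, 2, 3   # made-up opcodes, not a real ISA

def run(program, memory):
    pc = 0                # program counter: where we are in the program
    acc = 0               # accumulator register
    zero_flag = False     # status flag, set by arithmetic results
    while True:
        opcode, operand = program[pc]     # fetch the current instruction
        pc += 1                           # point at the next instruction
        if opcode == LOAD:                # decode, execute, write back
            acc = memory[operand]
        elif opcode == ADD:
            acc += memory[operand]
            zero_flag = (acc == 0)        # flags record facts about results
        elif opcode == JNZ:               # a jump: overwrites the PC
            if not zero_flag:
                pc = operand
        elif opcode == HALT:
            return acc

memory = [5, 3]
program = [(LOAD, 0), (ADD, 1), (HALT, 0)]
print(run(program, memory))   # 8
```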

More complex CPUs fetch, decode, and execute several instructions at once in overlapping stages. This arrangement is known as the classic RISC pipeline, and it is common in the simple CPUs found in many electronic devices, often called microcontrollers.

Design and Implementation in Processors

1) Clock Frequency

Most processors, like most sequential logic devices, are synchronous: they are designed around, and operate on, a synchronization signal known as the clock. This signal usually takes the form of a periodic square wave.

Designers choose the clock period by calculating the maximum time electrical signals need to travel through the processor’s paths. The period must be longer than this worst-case propagation delay, which guarantees every signal has settled before the next clock edge.
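
A rough back-of-the-envelope illustration, with an invented delay figure:

```python
# If the slowest signal path has a worst-case propagation delay of 0.4 ns
# (a hypothetical figure), the clock period cannot be shorter than that,
# which caps the clock frequency:
worst_case_delay_s = 0.4e-9
max_frequency_hz = 1 / worst_case_delay_s
print(f"{max_frequency_hz / 1e9:.1f} GHz")   # 2.5 GHz upper bound
```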

Fixing the clock to the worst case simplifies the CPU’s design and reduces its component count. The drawback is that the entire processor must wait on its slowest elements, even though some units are much faster. Designers compensate for this limitation with various methods that increase processor parallelism.

Architectural improvements alone do not remove every drawback of a global clock. For example, the clock signal itself is subject to electrical delays, and in complex processors running at high clock speeds, it becomes difficult to keep the signal in phase (synchronized) across the whole chip.

For that reason, many modern processors distribute multiple identical clock signals. Another problem is heat: as the clock rate rises, so does the heat the CPU dissipates, because the constantly toggling clock causes many components to switch whether or not they are doing useful work.

This wasted switching raises energy consumption and, with it, heat, so effective cooling solutions are required at high clock speeds. One mitigation is clock gating, which shuts off the clock signal to components that are not needed; however, it is often considered difficult to implement and is mostly used in low-power designs.

A more radical method is to remove the global clock signal entirely. This considerably complicates the design process, but asynchronous (clockless) designs offer marked advantages in power consumption and heat dissipation.

Though rare, entire CPUs have been built without a global clock: the AMULET, which implements the ARM architecture, and the MiniMIPS, compatible with the MIPS R3000, are two examples. Other designs keep the clock but make only some units, such as the ALU, asynchronous.

Asynchronous designs can perform competitively with synchronous ones while offering excellent power-consumption and heat-dissipation properties, which makes them well suited to embedded computers.

2) Parallelism

The simple design described in the previous section, in which the CPU completes one instruction before starting the next, is inherently inefficient. A processor that works this way is called subscalar: it achieves less than one instruction per clock cycle.

A subscalar processor therefore spends many clock cycles on each instruction. Simply adding a second execution unit does not help much: instead of one pathway sitting idle, two pathways sit idle, and the number of unused transistors grows.

At best, such a design reaches scalar performance of one instruction per clock cycle, and in practice it almost always remains subscalar. However, there are design methods that make the processor behave less serially and more in parallel.

We can classify these design methods with two main terms:

  1. ILP (Instruction-Level Parallelism): aims to increase the rate at which instructions are executed within a single processor, keeping its execution resources busier.
  2. TLP (Thread-Level Parallelism): aims to increase the number of threads the processor can execute simultaneously.

The two approaches differ in how they are implemented and in how much they improve performance for a given application; a minimal software-level sketch of TLP follows below.
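
The snippet spreads independent work across CPU cores, which is the kind of workload hardware TLP (multiple cores or hardware threads) accelerates. It uses processes rather than threads because CPython’s global interpreter lock keeps CPU-bound threads from running simultaneously; the workload itself is invented for illustration:

```python
from concurrent.futures import ProcessPoolExecutor

def count_primes(limit: int) -> int:
    # Deliberately naive, CPU-bound work: count primes below `limit`.
    return sum(n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))
               for n in range(limit))

if __name__ == "__main__":
    chunks = [50_000] * 4                  # four independent tasks
    with ProcessPoolExecutor() as pool:    # one worker per core by default
        results = list(pool.map(count_primes, chunks))
    print(sum(results))                    # same total work, done in parallel
```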

Vector Processors

A less common but important type of CPU deals with vectors. The processors discussed above are all scalar devices, handling one piece of data at a time.

Instead of processing a single piece of data per instruction, a vector processor handles many pieces of data with each instruction. The two models are commonly labeled SISD (Single Instruction, Single Data) for scalar processors and SIMD (Single Instruction, Multiple Data) for vector processors.

SISD executes one instruction on one piece of data, while SIMD applies one instruction to many pieces of data at once. Vector processors can therefore perform the same operation, say a sum or a dot product, across a large data set very quickly.

Multimedia applications and scientific computing are classic beneficiaries: where a scalar processor must loop, executing an instruction for each data element, a vector processor covers the whole data set with a single instruction.
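
A rough software analogy of the scalar-versus-vector difference, using NumPy (whose compiled loops can use SIMD instructions such as SSE or AVX on most builds, though the exact instructions depend on the hardware):

```python
import numpy as np

a = np.arange(100_000, dtype=np.float32)
b = np.arange(100_000, dtype=np.float32)

# Scalar style: one addition per loop step, the way a scalar CPU works.
scalar_sum = [a[i] + b[i] for i in range(len(a))]

# Vector style: one expression over the whole array. NumPy's compiled loop
# can apply SIMD instructions to add several elements per machine instruction.
vector_sum = a + b

assert np.allclose(scalar_sum, vector_sum)
```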

The first vector CPUs appeared mainly in cryptography and scientific research. Later, as multimedia largely shifted to digital media, the need for some form of SIMD in general-purpose processors grew.

Soon after floating-point units became commonplace in general-purpose processors, SIMD execution units began to appear as well. Early SIMD specifications, such as Intel’s MMX, handled integers only.

That proved a significant obstacle for software developers, since many of the applications that benefit from SIMD deal primarily with floating-point numbers. These early designs were gradually refined into the modern SIMD specifications, notably Intel’s SSE and PowerPC’s AltiVec.

Conclusion

In conclusion, the evolution of the CPU has been remarkable, spanning from the bulky, unreliable relays and vacuum tubes of the early days, through transistors, to the modern era of integrated microelectronic designs.

This evolution has increased the speed and reliability of CPUs. It has also led to their widespread presence in everyday devices.

As technology continues to advance, the future of processor design looks promising: ever smaller, more powerful, and more energy-efficient CPUs will keep shaping our digital world!
