The video card, or graphics card, uses dedicated support processors to handle image data as quickly and efficiently as possible, along with memory chips that store images temporarily.
What is a Video Card?
S3 Corporation's fixed-function graphics accelerator family comprised the 86C801, 86C805, 86C924, and 86C928 chips, which powered many of the accelerated graphics adapters used to speed up Microsoft Windows video response.
Modern video cards can offer additional features such as TV tuning, light pen connectors, video capture, and decoding of various video formats.
A graphics card usually has two main characteristics: the image resolution and the number of colors it can display simultaneously.
Together, these characteristics determine whether the user can enjoy certain video games or run graphics-intensive software such as design programs.
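As a rough, illustrative sketch (the formula is the standard one; the resolutions and color depths below are arbitrary examples, not from the original text), the video memory a single frame needs grows with both characteristics:

```python
# Video memory needed for one uncompressed frame at a given
# resolution and color depth. Example modes are illustrative only.

def framebuffer_bytes(width: int, height: int, bits_per_pixel: int) -> int:
    """Size in bytes of one uncompressed frame."""
    return width * height * bits_per_pixel // 8

for w, h, bpp in [(640, 480, 8), (1024, 768, 16), (1920, 1080, 32)]:
    kib = framebuffer_bytes(w, h, bpp) / 1024
    print(f"{w}x{h} at {bpp} bpp ({2 ** bpp:,} colors): {kib:,.0f} KiB per frame")
```

The more pixels and the more simultaneous colors, the more memory each frame consumes, which is why the two characteristics are usually quoted together.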
History of Graphics Cards
The history of graphics cards began in the late 1960s, when monitors replaced printers as the main display device.
The first cards could only display text at 40×25 or 80×25 characters. However, the advent of the first graphics chips, such as the Motorola 6845, made it possible to offer equipment based on the S-100 or Eurocard bus with graphics capabilities. It was with the addition of a television modulator that the term graphics card was first used.
The success of the home computer and the first video game consoles led to these chips being integrated into the motherboard to reduce costs.
Even computers that already came with a graphics chip included 80-column cards that added an 80×24 or 80×25 character text mode, mainly for running CP/M software.
The evolution of graphics cards took a significant turn in 1995 with the advent of the first 2D/3D cards, produced by Matrox, Creative, S3, and ATI. These cards complied with the SVGA standard but added 3D functionality. In 1997, 3dfx launched the Voodoo graphics chip, which brought new 3D effects as well as new computing power.
From this point on, a series of graphics cards followed, such as the Voodoo2 from 3dfx and the TNT and TNT2 from NVIDIA. The power these cards achieved meant that the PCI port they were connected to ran short of bandwidth, so Intel developed the AGP (Accelerated Graphics Port) to eliminate the bottlenecks that had begun to appear between the processor and the card.
From 1999 to 2002, NVIDIA dominated the graphics card market with its GeForce product line. During this period, development focused on 3D algorithms and the speed of graphics processors.
However, memory speeds also needed to increase, so DDR memory was adopted on graphics cards. At that time, video memory capacities grew from 32 MB on the GeForce to 64 and then 128 MB on the GeForce 4.
Graphics Card Types
1) MDA Card
IBM released the Monochrome Display Adapter (MDA), or Monochrome Adapter, with 4 KB of memory.
It was designed for TTL monitors. It had no graphics modes; its only mode was 80×25 text, with each character drawn in a 14×9 dot cell, and it offered no configurable options.
Basically, the card's video controller reads the dot patterns of the characters to be displayed from ROM and sends them to the monitor as serial information. The lack of graphics processing should not be surprising, since no application on these early PCs could really take advantage of a capable video system; in practice, everything was limited to text-mode information.
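A quick back-of-the-envelope check (standard arithmetic, not a figure from any datasheet) shows why 4 KB was enough for this text mode: each of the 80×25 character cells needs only two bytes, one for the character code and one for its attributes.

```python
# MDA text mode: 80x25 character cells, 2 bytes per cell
# (character code + attribute byte). This is why 4 KB sufficed.
COLS, ROWS, BYTES_PER_CELL = 80, 25, 2

buffer_bytes = COLS * ROWS * BYTES_PER_CELL
print(f"Text buffer: {buffer_bytes} bytes ({buffer_bytes / 1024:.2f} KB of 4 KB)")
```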
This type of card is easy to identify, as it includes a communication port for the printer.
2) CGA Card
Depending on the source, it is called the "Color Graphics Adapter" or "Color Graphics Array." Also introduced by IBM in 1981, it was very common. It allowed 8×8 dot character cells on 25-line, 80-column displays, although only 7×7 dots were used to draw each character.
This detail made it impossible to display true underscores, so they were replaced by rendering the affected character at a different intensity. In graphics mode, it allowed resolutions up to 640×200. The memory was 16 KB, and it was only compatible with RGB and composite monitors.
Although superior to MDA, many users preferred the latter because the dot pitch of CGA monitors was larger.
It could display 8 colors, each at two intensities, for a total of 16 different tones, although not all of them were available at every resolution.
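The 16 KB of memory is precisely what forces this trade-off between resolution and colors; a minimal sketch of the arithmetic, using the standard CGA mode figures:

```python
# CGA's 16 KB video memory forces a resolution/color trade-off:
# both classic graphics modes fill essentially the same buffer.
modes = [
    ("640x200, 2 colors (1 bpp)", 640, 200, 1),
    ("320x200, 4 colors (2 bpp)", 320, 200, 2),
]
for name, w, h, bpp in modes:
    size = w * h * bpp // 8
    print(f"{name}: {size:,} bytes of {16 * 1024:,} available")
```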
This card had a fairly common fault known as snow: random, bright, intermittent dots that distorted the on-screen image, so much so that some BIOSes of the time included a No Snow option to suppress it.
3) HGC Card
The Hercules Graphics Card is more popularly known as Hercules. It was released in 1982 and became a hugely successful standard despite not supporting IBM’s BIOS routines. Its resolution was 720×348 dots in monochrome with 64 KB of memory.
Since there was no color, the memory's only job was to map every dot on the screen, using 30.58 KB for graphics mode and leaving the rest for text mode and other functions. The screen was refreshed at 50 Hz, driven by the 6845 video controller, and characters were drawn in 14×9 dot matrices.
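The 30.58 KB figure can be checked directly from the resolution, since each monochrome pixel needs exactly one bit:

```python
# Hercules graphics mode: 720x348 pixels, one bit per monochrome pixel.
graphics_kb = 720 * 348 / 8 / 1024
print(f"Graphics bitmap: {graphics_kb:.2f} KB")            # ~30.59 KB
print(f"Left of 64 KB for text mode etc.: {64 - graphics_kb:.2f} KB")
```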
Graphics Card Components
1) GPU
GPU, which stands for graphics processing unit, is a processor dedicated to graphics processing. Its purpose is to relieve the workload of the central processor. Therefore, it is optimized for floating-point computation, which is dominant in 3D functions. Most of the information presented in the specifications of a graphics card refers to the characteristics of the GPU, as it is the most essential part of the card.
Three of the most important of these characteristics are the core clock frequency, which in 2010 ranged from 500 MHz on low-end cards to 850 MHz on high-end cards, the number of shader processors, and the number of pipelines. Shaders and pipelines are what transform the 3D image, made up of vertices and lines, into a 2D image made up of pixels.
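As a purely illustrative sketch of that last step (a single perspective divide, not any particular GPU's multi-stage pipeline), the transformation from a 3D vertex to a 2D pixel is exactly the kind of floating-point work the GPU is optimized for:

```python
# Minimal perspective projection: map a camera-space 3D vertex to
# 2D screen coordinates. Real pipelines use 4x4 matrices and many
# more stages; this shows only the floating-point core of the idea.

def project(x: float, y: float, z: float,
            focal: float = 1.0,
            width: int = 640, height: int = 480) -> tuple[int, int]:
    """Project a point in front of the camera (z > 0) onto the screen."""
    ndc_x = focal * x / z                 # perspective divide
    ndc_y = focal * y / z
    px = int((ndc_x + 1) / 2 * width)     # map [-1, 1] to pixel columns
    py = int((1 - ndc_y) / 2 * height)    # flip y: screen origin is top-left
    return px, py

print(project(0.5, 0.25, 2.0))            # -> (400, 210)
```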
2) Graphics RAM Memory
If the graphics card is integrated into the motherboard, it uses the computer's own RAM; otherwise, it carries its own dedicated memory. This memory is known as video memory, or VRAM.
In 2010, the memory used was based on DDR technology, notably GDDR2, GDDR3, GDDR4, and GDDR5, with GDDR2, GDDR3, and GDDR5 the most widely used. Memory clock frequencies ranged from 400 MHz to 4.5 GHz (effective).
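What these clocks mean in practice is bandwidth. A hedged sketch of the standard calculation (the bus widths below are illustrative examples, not tied to specific cards):

```python
# Peak memory bandwidth = clock x transfers per clock x bus width.
# DDR-type memory transfers twice per clock; GDDR5 effectively four
# times. Bus widths here are illustrative examples only.

def bandwidth_gb_s(clock_mhz: float, bus_width_bits: int,
                   transfers_per_clock: int = 2) -> float:
    return clock_mhz * 1e6 * transfers_per_clock * bus_width_bits / 8 / 1e9

print(f"{bandwidth_gb_s(400, 128):.1f} GB/s")      # low end of the 2010 range
print(f"{bandwidth_gb_s(1125, 256, 4):.1f} GB/s")  # a GDDR5-style configuration
```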
3) RAMDAC
The RAMDAC is responsible for converting digital signals generated by the computer into an analog signal that the monitor can interpret.
Depending on the number of bits it processes at once and the speed at which it processes them, the converter can support different monitor refresh rates. With the growing popularity of digital monitors, the RAMDAC is becoming obsolete, since no analog conversion is needed; however, many cards still retain a VGA connector for compatibility.
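The relationship described above can be put into a simple formula: the RAMDAC's pixel clock, divided by the number of pixels per frame (including blanking overhead), bounds the refresh rate. A rough sketch, where the 1.3× blanking factor is a typical ballpark rather than an exact specification:

```python
# Maximum refresh rate a RAMDAC can drive at a given resolution.
# The 1.3x blanking factor is a typical ballpark, not an exact spec.

def max_refresh_hz(ramdac_mhz: float, width: int, height: int,
                   blanking_factor: float = 1.3) -> float:
    return ramdac_mhz * 1e6 / (width * height * blanking_factor)

print(f"{max_refresh_hz(400, 2048, 1536):.0f} Hz at 2048x1536 "
      f"with a 400 MHz RAMDAC")   # roughly 98 Hz
```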
Connection Ports on Motherboard
In chronological order, the connection systems between the video card and the motherboard are basically the following (a bandwidth comparison follows the list):
- MSX Socket: 8-bit data bus used in MSX equipment.
- ISA: A 16-bit, 8 MHz data bus architecture, dominant in the 1980s; it was created for the IBM PC in 1981.
- Zorro II: Used in the Commodore Amiga 2000 and Commodore Amiga 1500.
- Zorro III: Used in the Commodore Amiga 3000 and Commodore Amiga 4000.
- NuBus: Used in Apple Macintosh.
- Processor Direct Socket: Used in Apple Macintosh.
- MCA: IBM's 1987 attempt to replace ISA. It was 32-bit and ran at 10 MHz, but was not compatible with its predecessors.
- EISA: The 1988 response of IBM's competitors to MCA; 32-bit, 8.33 MHz, and backward compatible with ISA.
- VESA: An extension of ISA that removed the 16-bit limitation, doubling the bus width to 32 bits and running at 33 MHz.
- PCI: The bus that replaced the previous ones from 1993 onward; 32 bits wide at 33 MHz, it allowed dynamic configuration of connected devices without setting jumpers manually. PCI-X was a version that widened the bus to 64 bits and raised the speed to 133 MHz.
- AGP: A dedicated bus, 32 bits wide like PCI; the first version, in 1997, raised the speed to 66 MHz.
- PCIe: A serial interface that began competing with AGP in 2004 and doubled its bandwidth in 2006. It should not be confused with PCI-X, the extended version of PCI.
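To put the parallel buses above on a common scale, theoretical peak bandwidth is simply bus width times clock frequency; a comparison using the nominal figures from the list (real-world throughput was lower):

```python
# Theoretical peak bandwidth of the parallel buses listed above:
# (bus width in bits / 8) x clock in MHz = MB/s. Nominal figures only.
buses = [
    ("ISA",    16,   8.00),
    ("EISA",   32,   8.33),
    ("VESA",   32,  33.00),
    ("PCI",    32,  33.00),
    ("AGP 1x", 32,  66.00),
    ("PCI-X",  64, 133.00),
]
for name, bits, mhz in buses:
    print(f"{name:8s} {bits / 8 * mhz:7.1f} MB/s")
```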
What are the Connection Systems?
- DA-15 RGB Connector: Mostly used on Apple Macintosh.
- Digital TTL DE-9: Used by primitive IBM cards (MDA, CGA and variants, EGA, and very few VGA).
- SVGA/D-sub 15: The analog standard since the 1990s. Designed for CRT devices, it suffers from electrical noise, the need for digital-to-analog conversion, and sampling errors when evaluating the pixels to be sent to the monitor.
- DVI: Designed to achieve maximum image quality on digital displays such as LCDs and projectors. It avoids distortion and noise by mapping each pixel directly to a physical pixel of the monitor at its native resolution.
- S-Video: Included to support televisions, DVD players, VCRs, and game consoles.
- Composite Video: Quite old and comparable to SCART; a very low-resolution analog signal carried over an RCA connector.
- Component Video: Also used for projectors; it carries the signal on three connectors (Y, Cb, and Cr) and offers quality comparable to SVGA.
- HDMI: Encrypted, uncompressed digital audio and video carried over a single cable.
- DisplayPort: A graphics card port created by VESA to rival HDMI; it transfers high-definition video and audio. Its advantages are that it is royalty-free, so manufacturers pay no license fee to include it in devices, and it has latches that prevent the cable from being unplugged accidentally.
Cooling Devices in Graphics Cards
Graphics cards reach very high temperatures because of the workloads they are exposed to. If this is not taken into account, the heat generated can cause the device to malfunction, lock up, or even break down. To prevent this, cooling devices are included to remove excess heat from the card.
- Heatsink: Made of heat-conducting material and fixed to the card. Its efficiency depends on its structure and total surface area, which is why heatsinks tend to be quite bulky.
- Fan: Removes heat from the card by moving the nearby air. It is less efficient than a heatsink and, since it has moving parts, produces noise.
Power Supply
Until recently, powering graphics cards was not a big problem. However, the trend with new cards is to consume more and more energy. Although power supplies grow more powerful every day, the bottleneck is the PCIe slot itself, which can deliver only 75 W of power.
Therefore, graphics cards that consume more than the PCIe slot can deliver have a connector that allows a direct connection between the power supply and the card, without going through the motherboard and therefore the PCIe slot.
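A minimal sketch of the resulting power budget (the 75 W slot limit and per-connector figures are the commonly cited nominal PCIe values):

```python
# Power available to a graphics card: the PCIe x16 slot itself plus
# any direct connectors from the power supply (nominal limits).
SLOT_W = 75
CONNECTOR_W = {"6-pin": 75, "8-pin": 150}

def power_budget(connectors: list[str]) -> int:
    return SLOT_W + sum(CONNECTOR_W[c] for c in connectors)

print(power_budget([]))                  # slot only: 75 W
print(power_budget(["6-pin"]))           # 150 W
print(power_budget(["8-pin", "8-pin"]))  # 375 W
```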
Nevertheless, it is expected that before long graphics cards may need their own dedicated power supply, turning the whole assembly into an external device.
Difference Between GPU and Graphics Card
The GPU should not be confused with the graphics card, because not every GPU produced ends up on a graphics card.
A graphics card can be thought of as a structure developed specifically for, and compatible with, the PC, and graphics cards used in computers have their own particular design. GPUs, on the other hand, are also used outside PCs: in game consoles or embedded in processors.
In addition, do not confuse the GPU manufacturer with the brands that develop and market the cards. For example, the largest graphics chip manufacturers on the market right now, NVIDIA and AMD, are responsible only for making the graphics chips (GPUs) themselves.