Graphics Card
A graphics card enhances a computer's video memory and display quality, elevating its
performance and enabling demanding tasks such as gaming and video editing. It plays a
pivotal role in rendering
lifelike visuals on a display device, such as a monitor, by generating a feed of graphics
output.
The GPU, often referred to as the brain of the graphics card, is the primary component
responsible for creating the stunning visuals we see on our screens. It comes in various
models, each with its own level of power and capabilities.
Graphics cards are typically computer expansion cards, housed in the form of printed circuit
boards (expansion boards) that are inserted into expansion slots on the motherboard.
However, some graphics cards may come in dedicated enclosures, connecting to the
computer via docking stations or cables, known as external GPUs (eGPUs).
One of the distinguishing features of a graphics card is its ability to offload graphics
processing tasks from the central processing unit (CPU), thereby enhancing overall system
performance. This allows for smoother gameplay, faster video rendering, and improved
graphical fidelity.
Beyond display output, modern graphics cards can also be used for other
processing tasks, reducing the workload on the CPU. Platforms like OpenCL and CUDA
enable the use of graphics cards for general-purpose computing applications, such as AI
training, cryptocurrency mining, and molecular simulation.
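As a concrete illustration of such general-purpose computing, the following minimal CUDA sketch offloads an element-wise vector addition from the CPU to the graphics card. The kernel name, array size, and launch configuration are illustrative choices, not details from the text above.

#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread adds one pair of elements.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;              // one million elements
    size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);       // unified memory is visible
    cudaMallocManaged(&b, bytes);       // to both the CPU and the GPU
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    vectorAdd<<<(n + threads - 1) / threads, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %.1f\n", c[0]);      // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

The same pattern underlies larger workloads such as AI training: the host CPU orchestrates, while thousands of GPU threads perform the data-parallel arithmetic.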
Overall, graphics cards are preferred over integrated graphics for their superior performance
and ability to handle demanding graphical tasks effectively. They are essential components
for anyone seeking to maximize their computer's graphical capabilities and overall
performance.
While integrated graphics may offer advantages in terms of affordability, simplicity, and
energy consumption, they typically deliver lower performance compared to dedicated
graphics cards. This is because the graphics processing unit (GPU) in integrated graphics
shares system resources with the CPU, leading to potential performance limitations.
In contrast, dedicated graphics cards boast separate components such as dedicated random
access memory (RAM), cooling systems, and power regulators. This allows them to offload
graphics processing tasks from the CPU and system RAM, potentially improving overall
system performance, particularly in graphics-intensive applications like gaming, 3D
animation, and video editing.
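One rough way to observe this distinction from software is to query each CUDA-capable device: the runtime reports whether a GPU is integrated (sharing system memory with the CPU) and how much dedicated memory it exposes. A minimal sketch, assuming the CUDA runtime is installed:

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // prop.integrated is 1 for an integrated GPU sharing host
        // memory, 0 for a dedicated card with its own VRAM.
        printf("GPU %d: %s (%s), %.1f GiB device memory\n",
               i, prop.name,
               prop.integrated ? "integrated" : "dedicated",
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}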
Both AMD and Intel have introduced CPUs and motherboard chipsets supporting integrated
GPUs. AMD markets CPUs with integrated graphics as Accelerated Processing Units
(APUs), while Intel promotes similar technology under the "Intel Graphics Technology"
branding. These integrated solutions offer a balance between performance and affordability,
catering to a wide range of computing needs.
As graphics cards have advanced in processing power, their hunger for electrical power has
grown accordingly. Today's high-performance models demand substantial energy, with
examples like the Nvidia Titan RTX boasting a thermal design power (TDP) of 280 watts,
while the GeForce RTX 2080 Ti Founders Edition can consume an average of 300 watts
during video game testing. Despite strides in CPU and power supply efficiency, graphics
cards remain among the most power-intensive components in computers.
To meet these demands, modern graphics cards often feature six-pin (75 W) or eight-pin (150
W) power sockets directly connecting to the power supply. However, cooling these power-
hungry cards poses a challenge, especially in systems with multiple graphics cards requiring
power supplies exceeding 750 watts. Effective heat extraction becomes crucial in such setups.
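The connector figures above imply a simple power budget: the PCIe slot itself supplies up to 75 W, each six-pin connector adds up to 75 W, and each eight-pin connector adds up to 150 W. A minimal sketch of that arithmetic (the connector counts in main are illustrative):

#include <cstdio>

// Nominal PCIe power limits cited above, in watts.
const int SLOT_W = 75, SIX_PIN_W = 75, EIGHT_PIN_W = 150;

int boardPowerLimit(int sixPinCount, int eightPinCount) {
    return SLOT_W + sixPinCount * SIX_PIN_W + eightPinCount * EIGHT_PIN_W;
}

int main() {
    // A card with one six-pin and one eight-pin connector:
    printf("Max board power: %d W\n", boardPowerLimit(1, 1));  // 300 W
    return 0;
}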
Nvidia's GeForce RTX 30 series, based on the Ampere architecture, has pushed power
draw to new heights. Custom variants like the "Hall of Fame" RTX 3090 have been recorded
peaking at a staggering 630 watts, while standard RTX 3090 models can reach up to 450
watts. Even the RTX 3080 and 3070 draw significant power, up to 350 watts in the former's
case. Founders Edition cards employ a "dual axial flow through" cooler design, efficiently
expelling heat through fans positioned above and below the card.
Graphics cards come in various sizes to accommodate different computer builds. Some are
designated as "low profile," fitting smaller enclosures. These profiles primarily vary in
height, with low-profile cards occupying less than a standard PCIe slot's height. Length and
thickness also differ, with high-end models often spanning two or three expansion slots and
some, like the RTX 4090, exceeding 300mm in length. Opting for a lower profile card is
advisable when space is limited, although larger computer cases like mid-towers or full
towers can mitigate clearance issues.
Multicard scaling
Multicard scaling, a feature offered by some graphics cards, allows users to distribute
graphics processing across multiple cards. This can be achieved through the PCIe bus on the
motherboard or, more commonly, via a data bridge. Typically, the linked cards must be of the
same model, as most low-end cards lack support for this feature. AMD and Nvidia offer
proprietary scaling methods: AMD's CrossFireX and Nvidia's SLI, the latter replaced by
NVLink in the Turing generation. Cards from different manufacturers or architectures
cannot be combined for multicard scaling.
When using multiple cards, memory size compatibility is crucial. If cards have different
memory sizes, the system will utilize the lowest value, disregarding higher values. Presently,
consumer-grade setups can utilize up to four cards, necessitating a large motherboard with the
appropriate configuration. Nvidia's GeForce GTX 590 graphics card, for example, can be
configured in a four-card setup. To ensure optimal performance, users should use cards with
similar performance metrics.
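In general-purpose GPU code, an analogous form of multicard scaling can be expressed directly: the host enumerates the available devices and assigns each one a slice of the work. The CUDA sketch below shows the pattern; the kernel and sizes are illustrative, and real multicard rendering is coordinated by the vendor's driver rather than by application code like this.

#include <cstdio>
#include <cuda_runtime.h>

// Trivial kernel standing in for one device's slice of the work.
__global__ void fill(float *out, int n, float value) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = value;
}

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    if (count == 0) return 1;              // no CUDA device present
    const int n = 1 << 20;
    int chunk = (n + count - 1) / count;   // elements per device

    for (int d = 0; d < count; ++d) {
        cudaSetDevice(d);                  // subsequent calls target GPU d
        int offset = d * chunk;
        int size = (offset + chunk > n) ? n - offset : chunk;
        if (size <= 0) break;

        float *buf;
        cudaMalloc(&buf, size * sizeof(float));
        fill<<<(size + 255) / 256, 256>>>(buf, size, (float)d);
        cudaDeviceSynchronize();           // wait for this device's slice
        cudaFree(buf);
    }
    printf("Work split across %d device(s)\n", count);
    return 0;
}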
Motherboards certified for multicard configurations include models like the ASUS Maximus
III Extreme and the Gigabyte GA-EX58-Extreme. An adequate power supply is essential, especially
for four-card setups, which may require a 1000+ watt supply. Effective thermal management
is crucial for powerful graphics cards, requiring well-ventilated chassis and robust cooling
solutions such as air or water cooling. Larger configurations often employ advanced cooling
methods like immersion cooling to prevent thermal throttling.
While SLI and Crossfire setups have become less common due to limited game support and
affordability constraints, they remain prevalent in specialized applications like
supercomputers, workstations for video rendering and 3D rendering, visual effects,
simulations, and AI training. Graphics drivers play a critical role in supporting multicard
setups, with specific driver versions tailored for different operating systems. Additionally,
certain operating systems or software packages offer programming APIs for applications to
perform 3D rendering efficiently.
Parts
A Radeon HD 7970 with the main heatsink
removed, showing the major components of the card. The large, tilted silver object is the GPU die,
which is surrounded by RAM chips, which are covered in extruded aluminum heatsinks. Power
delivery circuitry is mounted next to the RAM, near the right side of the card.
A modern graphics card consists of a printed circuit board on which the components are
mounted. These include:
Graphics processing unit
Main article: graphics processing unit
A graphics processing unit (GPU), also occasionally called visual processing unit (VPU), is a
specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the
building of images in a frame buffer intended for output to a display. Because of the large degree
of programmable computational complexity for such a task, a modern graphics card is also a
computer unto itself.
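To make "building images in a frame buffer" concrete, the sketch below fills an RGBA pixel buffer on the GPU, one thread per pixel. This is only an analogy: on a real graphics card this role is played by shader and fixed-function hardware driven through APIs such as OpenGL, Vulkan, or Direct3D, not by hand-written CUDA.

#include <cstdio>
#include <cuda_runtime.h>

// One thread per pixel: write a horizontal red gradient into an
// RGBA8 frame buffer laid out row by row.
__global__ void shade(unsigned char *fb, int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;
    unsigned char *px = fb + 4 * (y * width + x);
    px[0] = (unsigned char)(255 * x / width);  // red ramps left to right
    px[1] = 0;                                 // green
    px[2] = 0;                                 // blue
    px[3] = 255;                               // alpha (opaque)
}

int main() {
    const int w = 640, h = 480;
    unsigned char *fb;
    cudaMallocManaged(&fb, 4 * w * h);

    dim3 block(16, 16);
    dim3 grid((w + block.x - 1) / block.x, (h + block.y - 1) / block.y);
    shade<<<grid, block>>>(fb, w, h);
    cudaDeviceSynchronize();

    printf("pixel (639, 0) red = %d\n", fb[4 * 639]);  // near-full red
    cudaFree(fb);
    return 0;
}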
Heat sink
A heat sink is a vital component found on most modern graphics cards, serving to efficiently
dissipate the heat generated by the graphics processing unit (GPU). This heat sink is designed
to evenly spread out the heat across its surface and throughout the unit itself. Often, a fan is
mounted on the heat sink to facilitate cooling, ensuring that the GPU operates within optimal
temperature ranges. However, not all graphics cards feature heat sinks; some utilize
alternative cooling methods such as liquid cooling systems or water blocks. In the early days
of graphics cards, particularly in the 1980s and early 1990s, heat production was minimal,
and heat sinks were not necessary.
Effective thermal management is essential for modern graphics cards, especially those with
high-performance GPUs. Alongside heat sinks, advanced cooling solutions may incorporate
heat pipes, typically made of copper, to enhance thermal conductivity and heat dissipation.
These thermal solutions help prevent overheating and maintain stable performance during
intensive graphics processing tasks.
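Thermal behavior can also be monitored from software. On Nvidia hardware, for instance, the NVML library reports the GPU core temperature. A minimal sketch, assuming an Nvidia card with NVML available (link with -lnvidia-ml):

#include <cstdio>
#include <nvml.h>

int main() {
    if (nvmlInit() != NVML_SUCCESS) return 1;

    nvmlDevice_t dev;
    nvmlDeviceGetHandleByIndex(0, &dev);   // first GPU in the system

    unsigned int tempC = 0;
    // Core temperature in degrees Celsius.
    nvmlDeviceGetTemperature(dev, NVML_TEMPERATURE_GPU, &tempC);
    printf("GPU temperature: %u C\n", tempC);

    nvmlShutdown();
    return 0;
}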
Video BIOS
The video BIOS, or firmware, plays a crucial role in the operation of a graphics card. It
contains essential programming for initializing and controlling various aspects of the card,
including memory configuration, operating speeds, and voltage settings. However, modern
video BIOSes primarily focus on basic functions like identifying the card and initializing
display modes, while more advanced features such as video scaling and pixel processing are
handled by software drivers.
Video memory
Graphics cards boast varying memory capacities, ranging from 2 to 24 GB, with some high-
end models offering up to 32 GB. This memory, often referred to as VRAM (Video Random
Access Memory), serves as dedicated storage for screen images, textures, vertex buffers, and
shader programs. Over the years, memory technology has evolved from DDR to advanced
versions like GDDR6, with corresponding increases in memory bandwidth and speed.
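How much of this memory is installed, and how much of it is currently free, can be queried at run time. A minimal CUDA sketch for the current device:

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    size_t freeBytes = 0, totalBytes = 0;
    // Reports free and total device (video) memory for the current GPU.
    cudaMemGetInfo(&freeBytes, &totalBytes);
    printf("VRAM: %.2f GiB free of %.2f GiB total\n",
           freeBytes / 1073741824.0, totalBytes / 1073741824.0);
    return 0;
}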
Output interfaces
Video Graphics Array (VGA)
Also known as D-sub, VGA is an analog-based standard adopted in the late 1980s, designed for
CRT displays; its connector is also called a VGA connector. Today, the VGA analog interface
is used for high-definition video resolutions, including 1080p and higher. Problems with this
standard include electrical noise, image distortion, and sampling error in evaluating pixels. While the VGA
transmission bandwidth is high enough to support even higher resolution playback, the picture
quality can degrade depending on cable quality and length. The extent of quality difference
depends on the individual's eyesight and the display; when using a DVI or HDMI connection,
especially on larger sized LCD/LED monitors or TVs, quality degradation, if present, is
prominently visible. Blu-ray playback at 1080p is possible via the VGA analog interface, if Image
Constraint Token (ICT) is not enabled on the Blu-ray disc.
Digital Visual Interface (DVI)
Digital Visual Interface is a digital-based standard designed for displays such as flat-panel
displays (LCDs, plasma screens, wide high-definition television displays) and video projectors.
There were also some rare high-end CRT monitors that used DVI. It avoids image distortion and
electrical noise by mapping each pixel from the computer to a display pixel, using the display's
native resolution. Most manufacturers include a DVI-I connector, allowing (via a simple
adapter) standard RGB signal output to an older CRT or LCD monitor with VGA input.
Video-in video-out (VIVO) for S-Video, composite video and component video
VIVO connector
Main article: Video-in video-out
These connectors are included to allow connection with televisions, DVD players, video
recorders and video game consoles. They often come in two 10-pin mini-DIN
connector variations, and the VIVO splitter cable generally comes with either 4 connectors (S-
Video in and out plus composite video in and out), or 6 connectors (S-Video in and
out, component YPBPR out and composite in and out).
High-Definition Multimedia Interface (HDMI)
HDMI is a compact audio/video interface for transferring uncompressed video data and
compressed/uncompressed digital audio data from an HDMI-compliant device ("the source
device") to a compatible digital audio device, computer monitor, video projector, or digital
television.[52] HDMI is a digital replacement for existing analog video standards. HDMI
supports copy protection through HDCP.
DisplayPort
Main article: DisplayPort
DisplayPort is a digital display interface developed by the Video Electronics Standards
Association (VESA). Designed primarily to connect a video source to a display device, it can
also carry audio and other forms of data, and is positioned as a successor to VGA and DVI.
Motherboard interfaces
Chronologically, connection systems between the graphics card and the motherboard have
included the following (a sketch converting the listed bus widths and clock rates into
theoretical peak bandwidths appears after this list):
1. S-100 bus (1974): Introduced as part of the Altair 8800, the S-100 bus was the first
industry-standard bus for microcomputers.
2. ISA (1981): The Industry Standard Architecture, introduced by IBM, became
dominant in the 1980s. It was an 8- or 16-bit bus clocked at 8 MHz.
3. NuBus (1984): Used in Macintosh II computers, NuBus was a 32-bit bus with an
average bandwidth of 10 to 20 MB/s.
4. MCA (1987): IBM's Micro Channel Architecture, introduced in 1987, was a 32-bit
bus clocked at 10 MHz.
5. EISA (1988): Released to compete with IBM's MCA, the Extended Industry Standard
Architecture was compatible with ISA and operated as a 32-bit bus clocked at 8.33
MHz.
6. VLB (VESA Local Bus) (1992): An extension of ISA, VLB was a 32-bit bus clocked
at 33 MHz, providing faster data transfer.
7. PCI (Peripheral Component Interconnect) (1993): Replaced earlier bus standards,
including EISA, ISA, MCA, and VLB. PCI offered dynamic connectivity between
devices and operated as a 32-bit bus clocked at 33 MHz.
8. UPA (UltraPort Architecture) (1995): Introduced by Sun Microsystems, UPA was a
64-bit bus clocked at 67 or 83 MHz.
9. USB (Universal Serial Bus) (1996): Initially used for miscellaneous devices, USB
saw the introduction of USB displays and display adapters.
10. AGP (Accelerated Graphics Port) (1997): Dedicated to graphics, AGP was a 32-bit
bus clocked at 66 MHz, providing faster data transfer for graphics-intensive
applications.
11. PCI-X (PCI eXtended) (1998): An extension of the PCI bus, PCI-X increased the
bus width to 64 bits and clock frequency to up to 133 MHz, offering enhanced
performance.
12. PCI Express (PCIe) (2004): A point-to-point interface, PCIe offered significantly
faster data transfer rates compared to AGP. It became the standard for modern
graphics cards, providing improved performance and scalability.
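For the parallel buses in the list above, a theoretical peak transfer rate follows directly from bus width and clock rate: bytes per cycle multiplied by cycles per second. The sketch below applies that arithmetic to several of the listed standards; it ignores protocol overhead and the multiple-transfers-per-clock modes that AGP (2x, 4x, 8x) added, and it does not apply to serial point-to-point links such as PCIe.

#include <cstdio>

// Peak bandwidth of a simple parallel bus in MB/s:
// (width in bits / 8) bytes per cycle * clock in MHz.
double peakMBs(int widthBits, double clockMHz) {
    return widthBits / 8.0 * clockMHz;
}

int main() {
    printf("ISA   16-bit @ 8 MHz:   %5.0f MB/s\n", peakMBs(16, 8));
    printf("PCI   32-bit @ 33 MHz:  %5.0f MB/s\n", peakMBs(32, 33));
    printf("AGP   32-bit @ 66 MHz:  %5.0f MB/s\n", peakMBs(32, 66));
    printf("PCI-X 64-bit @ 133 MHz: %5.0f MB/s\n", peakMBs(64, 133));
    return 0;
}

By this measure PCI peaks at about 132 MB/s, which is why the dedicated, faster AGP slot and later the serial PCIe interface displaced it for graphics.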