Graphics Card


A graphics card, also called a video card or display adapter, is hardware designed to
enhance a computer's video memory and display quality, elevating its performance and
enabling demanding tasks like gaming and video editing. It plays a pivotal role in rendering
lifelike visuals on a display device, such as a monitor, by generating a feed of graphics
output.

The GPU, often referred to as the brain of the graphics card, is the primary component
responsible for creating the stunning visuals we see on our screens. It comes in various
models, each with its own level of power and capabilities.

Graphics cards are typically computer expansion cards, housed in the form of printed circuit
boards (expansion boards) that are inserted into expansion slots on the motherboard.
However, some graphics cards may come in dedicated enclosures, connecting to the
computer via docking stations or cables, known as external GPUs (eGPUs).

One of the distinguishing features of a graphics card is its ability to offload graphics
processing tasks from the central processing unit (CPU), thereby enhancing overall system
performance. This allows for smoother gameplay, faster video rendering, and improved
graphical fidelity.

In addition to display output, modern graphics cards can also be utilized for additional
processing tasks, reducing the workload on the CPU. Platforms like OpenCL and CUDA
enable the use of graphics cards for general-purpose computing applications, such as AI
training, cryptocurrency mining, and molecular simulation.
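The offload model behind CUDA and OpenCL can be illustrated with a hedged, CPU-only Python sketch: a "kernel" function is applied independently to every element of an array, which is the data-parallel pattern a GPU executes across thousands of hardware threads. The function names here are illustrative, not part of either API:

```python
# Data-parallel "SAXPY" kernel: y[i] = a * x[i] + y[i].
# On a GPU, each index i would be handled by a separate hardware
# thread; here the launch loop emulates that pattern serially.
def saxpy_kernel(i, a, x, y):
    y[i] = a * x[i] + y[i]

def launch(kernel, n, *args):
    # Stand-in for a CUDA/OpenCL kernel launch over n threads.
    for i in range(n):
        kernel(i, *args)

x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]
launch(saxpy_kernel, len(x), 2.0, x, y)
print(y)  # [12.0, 24.0, 36.0, 48.0]
```

Because every element is independent, the same computation parallelizes trivially, which is why GPUs excel at workloads like AI training and simulation.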

Overall, graphics cards are preferred over integrated graphics for their superior performance
and ability to handle demanding graphical tasks effectively. They are essential components
for anyone seeking to maximize their computer's graphical capabilities and overall
performance.

Types of Graphics Card


1. Integrated – Graphics built into the motherboard (or CPU) are
known as integrated graphics. They are used in most laptops
and cannot be easily upgraded.
2. Discrete – A discrete graphics card is a separate piece of
hardware added to the motherboard as an extra component.
Most people do not need a discrete graphics card for their
work on a PC. Basic tasks like creating files, doing office
work, watching movies, or listening to songs do not require
one. But users who play high-resolution games or edit video
need the extra power of a discrete graphics card.
Components of Graphics Card
 GPU: The graphics processing unit is the card's main chip, a
processor dedicated to graphics work, much as the CPU is the
main processor of the system.
 Memory: Also referred to as VRAM, this is memory allocated
specifically to the graphics card to hold frame buffers,
textures, and other graphics data.
 Interface: The majority of modern cards use PCI Express, the
connector located at the bottom edge of the card.
 Heat Sink: Every GPU has a heat sink, usually paired with
fans, to dissipate the heat that accumulates during use.
 Power Connectors: Modern GPUs require six- or eight-pin
power connectors; in some cases two or three are needed.
 Outputs: A variety of video outputs are available, frequently
including HDMI, DisplayPort, DVI, or VGA.
 BIOS: The video BIOS stores the card's basic setup and
firmware, including data on voltages, memory, and other
components, so the card can initialize itself when the system
is powered on.
Features of Graphics Card
 Memory: A graphics card carries its own memory, historically
ranging from 128 MB up to 2 GB, and far more on current
cards. A card with more memory supports higher resolutions,
more colors on the screen, and better special effects.
 Multiple Screen Support: Most new video cards can drive two
monitors from one card. This feature is very important for
video editing, and hardcore gamers crave the extra screen
real estate as well. You can either use the monitors as two
separate desktops or join them into one extended desktop.
 Gaming and Video Editing: The discrete graphics card is not
only for gamers; users of high-end video editing software
also benefit, as a high-quality graphics card reduces image
rendering time and provides a high-definition environment.
 Connection – The graphics card is connected to the monitor
through one of several ports, but the same port must be
present on both the monitor and the graphics card. These are
some common ports used to connect a graphics card with a
monitor:
 VGA
 HDMI
 DVI
 Some motherboards have more than one expansion slot, so
more than one graphics card can be added for better
performance. Many laptops nowadays come with an
integrated graphics card built in.
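The link between video memory, resolution, and color depth can be sketched with a small calculation. This is a deliberately simplified model that counts only a single frame buffer, ignoring textures, auxiliary buffers, and compression:

```python
def framebuffer_bytes(width, height, bits_per_pixel):
    # One full screen image: every pixel stores its color value.
    return width * height * bits_per_pixel // 8

# A 1920x1080 display at 32-bit color needs roughly 8 MB per frame:
size = framebuffer_bytes(1920, 1080, 32)
print(size)           # 8294400 bytes
print(size / 2**20)   # ~7.9 MiB
```

Higher resolutions and deeper color multiply this figure, which is one reason cards with more VRAM handle higher resolutions and more on-screen colors.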
Manufacturers of Graphics Card
The two main manufacturers of discrete graphics card are:
 NVIDIA
 AMD

Integrated graphics serve as a viable alternative to dedicated graphics cards, with
implementations integrated into the motherboard, CPU, or system-on-chip (SoC).
Motherboard-based setups are often referred to as "on-board video", offering a cost-effective,
compact, and energy-efficient solution.

While integrated graphics may offer advantages in terms of affordability, simplicity, and
energy consumption, they typically deliver lower performance compared to dedicated
graphics cards. This is because the graphics processing unit (GPU) in integrated graphics
shares system resources with the CPU, leading to potential performance limitations.

In contrast, dedicated graphics cards boast separate components such as dedicated random
access memory (RAM), cooling systems, and power regulators. This allows them to offload
graphics processing tasks from the CPU and system RAM, potentially improving overall
system performance, particularly in graphics-intensive applications like gaming, 3D
animation, and video editing.

Both AMD and Intel have introduced CPUs and motherboard chipsets supporting integrated
GPUs. AMD markets CPUs with integrated graphics as Accelerated Processing Units
(APUs), while Intel promotes similar technology under the "Intel Graphics Technology"
branding. These integrated solutions offer a balance between performance and affordability,
catering to a wide range of computing needs.

As graphics cards have advanced in processing power, their hunger for electrical power has
grown accordingly. Today's high-performance models demand substantial energy, with
examples like the GeForce Titan RTX boasting a thermal design power (TDP) of 280 watts,
while the GeForce RTX 2080 Ti Founder's Edition can consume an average of 300 watts
during video game testing. Despite strides in CPU and power supply efficiency, graphics
cards remain among the most power-intensive components in computers.

To meet these demands, modern graphics cards often feature six-pin (75 W) or eight-pin (150
W) power sockets directly connecting to the power supply. However, cooling these power-
hungry cards poses a challenge, especially in systems with multiple graphics cards requiring
power supplies exceeding 750 watts. Effective heat extraction becomes crucial in such setups.

The latest Nvidia GeForce RTX 30 series, using Ampere architecture, has pushed power
draw to new heights. Custom variants like the "Hall of Fame" RTX 3090 have been recorded
peaking at a staggering 630 watts, while standard RTX 3090 models can reach up to 450
watts. Even the RTX 3080 and 3070 draw significant power, up to 350 watts in the former's
case. Founders Edition cards employ a "dual axial flow through" cooler design, efficiently
expelling heat through fans positioned above and below the card.

Graphics cards come in various sizes to accommodate different computer builds. Some are
designated as "low profile," fitting smaller enclosures. These profiles primarily vary in
height, with low-profile cards occupying less than a standard PCIe slot's height. Length and
thickness also differ, with high-end models often spanning two or three expansion slots and
some, like the RTX 4090, exceeding 300mm in length. Opting for a lower profile card is
advisable when space is limited, although larger computer cases like mid-towers or full
towers can mitigate clearance issues.

Multicard Scaling

Multicard scaling, a feature offered by some graphics cards, allows users to distribute
graphics processing across multiple cards. This can be achieved through the PCIe bus on the
motherboard or, more commonly, via a data bridge. Typically, the linked cards must be of the
same model, as most low-end cards lack support for this feature. AMD and Nvidia offer their
proprietary scaling methods, with AMD's CrossFireX and Nvidia's SLI (since replaced by
NVLink in the Turing generation). It's important to note that cards from different
manufacturers or architectures cannot be combined for multicard scaling.

When using multiple cards, memory size compatibility is crucial. If cards have different
memory sizes, the system will utilize the lowest value, disregarding higher values. Presently,
consumer-grade setups can utilize up to four cards, necessitating a large motherboard with the
appropriate configuration. Nvidia's GeForce GTX 590 graphics card, for example, can be
configured in a four-card setup. To ensure optimal performance, users should use cards with
similar performance metrics.
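The "lowest memory wins" rule for mismatched cards can be expressed as a tiny function. This is an illustrative model of the behavior described above, not vendor driver code:

```python
def effective_vram(card_vram_gb):
    # In SLI/CrossFireX setups each card mirrors the same data,
    # so usable per-card memory is capped by the smallest card:
    # the pool is not additive.
    return min(card_vram_gb)

# Pairing an 8 GB card with a 6 GB card yields 6 GB usable per card:
print(effective_vram([8, 6]))  # 6
```

This is also why matching cards of the same model, with identical memory sizes, is recommended for multicard setups.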

Motherboards certified for multicard configurations include models like the ASUS Maximus
3 Extreme and Gigabyte GA EX58 Extreme. Adequate power supply is essential, especially
for four-card setups, which may require a 1000+ watt supply. Effective thermal management
is crucial for powerful graphics cards, requiring well-ventilated chassis and robust cooling
solutions such as air or water cooling. Larger configurations often employ advanced cooling
methods like immersion cooling to prevent thermal throttling.

While SLI and Crossfire setups have become less common due to limited game support and
affordability constraints, they remain prevalent in specialized applications like
supercomputers, workstations for video rendering and 3D rendering, visual effects,
simulations, and AI training. Graphics drivers play a critical role in supporting multicard
setups, with specific driver versions tailored for different operating systems. Additionally,
certain operating systems or software packages offer programming APIs for applications to
perform 3D rendering efficiently.

Parts
A Radeon HD 7970 with the main heatsink
removed, showing the major components of the card. The large, tilted silver object is the GPU die,
which is surrounded by RAM chips, which are covered in extruded aluminum heatsinks. Power
delivery circuitry is mounted next to the RAM, near the right side of the card.
A modern graphics card consists of a printed circuit board on which the components are
mounted. These include:
Graphics processing unit
Main article: graphics processing unit

A graphics processing unit (GPU), also occasionally called visual processing unit (VPU), is a
specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the
building of images in a frame buffer intended for output to a display. Because of the large degree
of programmable computational complexity for such a task, a modern graphics card is also a
computer unto itself.

A half-height graphics card

A heat sink is a vital component found on most modern graphics cards, serving to efficiently
dissipate the heat generated by the graphics processing unit (GPU). This heat sink is designed
to evenly spread out the heat across its surface and throughout the unit itself. Often, a fan is
mounted on the heat sink to facilitate cooling, ensuring that the GPU operates within optimal
temperature ranges. However, not all graphics cards feature heat sinks; some utilize
alternative cooling methods such as liquid cooling systems or water blocks. In the early days
of graphics cards, particularly in the 1980s and early 1990s, heat production was minimal,
and heat sinks were not necessary.

Effective thermal management is essential for modern graphics cards, especially those with
high-performance GPUs. Alongside heat sinks, advanced cooling solutions may incorporate
heat pipes, typically made of copper, to enhance thermal conductivity and heat dissipation.
These thermal solutions help prevent overheating and maintain stable performance during
intensive graphics processing tasks.

The video BIOS, or firmware, plays a crucial role in the operation of a graphics card. It
contains essential programming for initializing and controlling various aspects of the card,
including memory configuration, operating speeds, and voltage settings. However, modern
video BIOSes primarily focus on basic functions like identifying the card and initializing
display modes, while more advanced features such as video scaling and pixel processing are
handled by software drivers.

Graphics cards boast varying memory capacities, ranging from 2 to 24 GB, with some high-
end models offering up to 32 GB. This memory, often referred to as VRAM (Video Random
Access Memory), serves as dedicated storage for screen images, textures, vertex buffers, and
shader programs. Over the years, memory technology has evolved from DDR to advanced
versions like GDDR6, with corresponding increases in memory bandwidth and speed.
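Memory bandwidth follows directly from the effective data rate and the bus width; a hedged back-of-the-envelope calculation (peak theoretical figures, ignoring real-world overheads):

```python
def memory_bandwidth_gbps(effective_gtps, bus_width_bits):
    # Bandwidth (GB/s) = effective transfer rate (GT/s)
    #                    x bus width in bytes (bits / 8).
    return effective_gtps * bus_width_bits / 8

# GDDR6 at 14 GT/s on a 256-bit bus:
print(memory_bandwidth_gbps(14, 256))  # 448.0 GB/s
```

The same formula shows why each GDDR generation's higher transfer rate, and wider buses on high-end cards, translate directly into more bandwidth for textures and frame data.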

The RAMDAC (Random-Access-Memory Digital-to-Analog Converter) is another important
component found in older graphics cards. It converts digital signals from the GPU into analog
signals for displays with analog inputs, such as CRT monitors. However, with the widespread
adoption of digital displays, the need for a separate RAMDAC has diminished, as most
modern displays utilize digital connections like HDMI or DisplayPort. As a result, newer
graphics cards often integrate the functionality of the RAMDAC directly into the GPU,
eliminating the need for a standalone component.
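A RAMDAC's role can be sketched numerically: each digital color component is scaled to an analog voltage level. The figures here assume the 0-0.7 V range used by VGA-style RGB signalling and an 8-bit component, purely for illustration:

```python
def dac_voltage(value, bits=8, full_scale=0.7):
    # Map a digital color component (0..2^bits - 1) onto the
    # 0-0.7 V analog range used by VGA-style RGB signalling.
    return full_scale * value / (2**bits - 1)

print(round(dac_voltage(255), 3))  # 0.7  (full brightness)
print(round(dac_voltage(0), 3))    # 0.0  (black)
```

A digital connection like HDMI or DisplayPort skips this conversion entirely, which is why modern cards no longer need a standalone RAMDAC.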

A Radeon HD 5850 with a DisplayPort, HDMI and two DVI ports

Output interfaces

Video-in video-out (VIVO) for S-Video (TV-out), Digital Visual Interface (DVI) for high-definition
television (HDTV), and DE-15 for Video Graphics Array (VGA)
The most common connection systems between the graphics card and the computer display are:
Video Graphics Array (VGA) (DE-15)

Video Graphics Array (DE-15)


Main article: Video Graphics Array

Also known as D-sub, VGA is an analog-based standard adopted in the late 1980s, designed for
CRT displays; its connector is also called the VGA connector. The VGA analog interface has been
used for high-definition video resolutions, including 1080p and higher. Drawbacks of this standard
are electrical noise, image distortion, and sampling error in evaluating pixels. While the VGA
transmission bandwidth is high enough to support even higher resolution playback, the picture
quality can degrade depending on cable quality and length. The extent of quality difference
depends on the individual's eyesight and the display; when using a DVI or HDMI connection,
especially on larger sized LCD/LED monitors or TVs, quality degradation, if present, is
prominently visible. Blu-ray playback at 1080p is possible via the VGA analog interface, if Image
Constraint Token (ICT) is not enabled on the Blu-ray disc.
Digital Visual Interface (DVI)

Digital Visual Interface (DVI-I)


Main article: Digital Visual Interface

Digital Visual Interface is a digital-based standard designed for displays such as flat-panel
displays (LCDs, plasma screens, wide high-definition television displays) and video projectors.
There were also some rare high-end CRT monitors that used DVI. It avoids image distortion and
electrical noise, mapping each pixel from the computer to a display pixel at the display's native
resolution. It is worth noting that most manufacturers include a DVI-I connector, allowing (via a
simple adapter) standard RGB signal output to an older CRT or LCD monitor with VGA input.
Video-in video-out (VIVO) for S-Video, composite video and component video

VIVO connector
Main article: Video-in video-out

These connectors are included to allow connection with televisions, DVD players, video
recorders and video game consoles. They often come in two 10-pin mini-DIN
connector variations, and the VIVO splitter cable generally comes with either 4 connectors (S-
Video in and out plus composite video in and out), or 6 connectors (S-Video in and
out, component YPBPR out and composite in and out).
High-Definition Multimedia Interface (HDMI)

High-Definition Multimedia Interface


Main article: HDMI

HDMI is a compact audio/video interface for transferring uncompressed video data and
compressed/uncompressed digital audio data from an HDMI-compliant device ("the source
device") to a compatible digital audio device, computer monitor, video projector, or digital
television.[52] HDMI is a digital replacement for existing analog video standards. HDMI
supports copy protection through HDCP.
DisplayPort

DisplayPort
Main article: DisplayPort

DisplayPort is a digital display interface developed by the Video Electronics Standards
Association (VESA). The interface is primarily used to connect a video source to a display
device such as a computer monitor, though it can also be used to transmit audio, USB, and other
forms of data.[53] The VESA specification is royalty-free. VESA designed it to replace VGA, DVI,
and LVDS. Backward compatibility to VGA and DVI by using adapter dongles enables
consumers to use DisplayPort fitted video sources without replacing existing display devices.
Although DisplayPort offers greater throughput with the same functionality as HDMI, it is
expected to complement the interface, not replace it.[54][55]
Motherboard interfaces
Main articles: Bus (computing) and Expansion card

ATI Graphics Solution Rev 3 from 1985/1986, supporting Hercules graphics. As can be seen from
the PCB, the layout was done in 1985, whereas the marking on the central chip CW16800-A says
"8639", meaning that chip was manufactured in week 39, 1986. This card uses the ISA 8-bit (XT)
interface.

The chronological progression of connection systems between graphics cards and
motherboards highlights the evolution of computer hardware and interconnectivity standards:

1. S-100 bus (1974): Introduced as part of the Altair 8800, the S-100 bus was the first
industry-standard bus for microcomputers.
2. ISA (1981): The Industry Standard Architecture, introduced by IBM, became
dominant in the 1980s. It was an 8- or 16-bit bus clocked at 8 MHz.
3. NuBus (1984): Used in Macintosh II computers, NuBus was a 32-bit bus with an
average bandwidth of 10 to 20 MB/s.
4. MCA (1987): IBM's Micro Channel Architecture, introduced in 1987, was a 32-bit
bus clocked at 10 MHz.
5. EISA (1988): Released to compete with IBM's MCA, the Extended Industry Standard
Architecture was compatible with ISA and operated as a 32-bit bus clocked at 8.33
MHz.
6. VLB (VESA Local Bus) (1992): An extension of ISA, VLB was a 32-bit bus clocked
at 33 MHz, providing faster data transfer.
7. PCI (Peripheral Component Interconnect) (1993): Replaced earlier bus standards,
including EISA, ISA, MCA, and VLB. PCI offered dynamic connectivity between
devices and operated as a 32-bit bus clocked at 33 MHz.
8. UPA (UltraPort Architecture) (1995): Introduced by Sun Microsystems, UPA was a
64-bit bus clocked at 67 or 83 MHz.
9. USB (Universal Serial Bus) (1996): Initially used for miscellaneous devices, USB
saw the introduction of USB displays and display adapters.
10. AGP (Accelerated Graphics Port) (1997): Dedicated to graphics, AGP was a 32-bit
bus clocked at 66 MHz, providing faster data transfer for graphics-intensive
applications.
11. PCI-X (PCI eXtended) (1998): An extension of the PCI bus, PCI-X increased the
bus width to 64 bits and clock frequency to up to 133 MHz, offering enhanced
performance.
12. PCI Express (PCIe) (2004): A point-to-point interface, PCIe offered significantly
faster data transfer rates compared to AGP. It became the standard for modern
graphics cards, providing improved performance and scalability.
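The jump from shared parallel buses to PCIe can be made concrete with a rough bandwidth comparison. The formula below gives peak theoretical figures for a parallel bus; the PCIe per-lane number is the commonly cited ~250 MB/s for PCIe 1.0 after 8b/10b encoding, used here as an assumption:

```python
# Peak theoretical bandwidth of a parallel bus:
# width (bits) x clock (MHz) / 8 -> MB/s.
def parallel_bus_mbps(width_bits, clock_mhz):
    return width_bits * clock_mhz / 8

print(parallel_bus_mbps(32, 33))   # PCI (32-bit, 33 MHz): 132.0 MB/s
print(parallel_bus_mbps(32, 66))   # AGP 1x (32-bit, 66 MHz): 264.0 MB/s

# PCIe is serial and per-lane: PCIe 1.0 delivers about 250 MB/s per
# lane, so a full x16 slot reaches roughly 4000 MB/s in each direction.
print(250 * 16)  # 4000
```

Later PCIe generations roughly double the per-lane rate each time, which is why the point-to-point design has scaled where shared parallel buses could not.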

This chronological overview demonstrates the steady advancement in graphics card
connectivity, driven by the demand for faster data transfer speeds and more efficient
communication between components.
