How do computer graphics cards work?

Thursday, 23 April, 2020

It’s common knowledge that computers have a central processing unit, or CPU, which controls pretty much everything the machine does.

This silicon-based microprocessor chip is effectively an electronic brain, handling tasks like processing user inputs and running software packages.

What’s less commonly known is that most modern computers have another microprocessor chip, embedded on a separate removable card that plugs into the motherboard.

This is the computer graphics card, at the heart of which sits a graphics processing unit – better known as a GPU.

To the untrained eye, these microprocessors look very similar. Yet graphics cards work very differently from CPUs.

Manufactured by specialist companies like Nvidia, Intel and AMD, a high-end graphics card with twin cooling fans could cost over £1,000.

However, far cheaper models are still capable of processing graphics for any contemporary computer game.

Core strength

The first graphics card was released in 1981, displaying green or white text against a black background.

Of course, we’ve come a long way since then.

Modern GPUs contain thousands of individual cores, each capable of carrying out its own instruction at any given moment.

This is known as parallel processing, and it means GPUs are able to carry out huge amounts of work simultaneously.

Although the scope of these processing tasks is relatively limited, their ability to perform calculations at lightning speed is ideal for intensive graphical displays.

They are charged with taking digital data and turning it into a visual representation, which is then output on a computer screen.
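To picture what that parallel work looks like, here’s a minimal, purely illustrative sketch written in CUDA C++ (the names and numbers are our own, not taken from any real product). Every GPU thread brightens exactly one pixel of a Full HD frame, so roughly two million tiny jobs run side by side:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative only: brighten every pixel of a greyscale Full HD frame at once.
// Each GPU thread handles exactly one pixel.
__global__ void brighten(unsigned char *pixels, int count, int amount)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // this thread's pixel
    if (i < count) {
        int v = pixels[i] + amount;
        pixels[i] = v > 255 ? 255 : v;               // clamp to the valid range
    }
}

int main()
{
    const int count = 1920 * 1080;                   // one Full HD frame
    unsigned char *pixels;
    cudaMallocManaged(&pixels, count);               // memory visible to CPU and GPU

    for (int i = 0; i < count; ++i) pixels[i] = 100; // fill with a mid grey

    int threadsPerBlock = 256;
    int blocks = (count + threadsPerBlock - 1) / threadsPerBlock;
    brighten<<<blocks, threadsPerBlock>>>(pixels, count, 50);
    cudaDeviceSynchronize();                         // wait for the GPU to finish

    printf("First pixel after brightening: %d\n", pixels[0]);
    cudaFree(pixels);
    return 0;
}
```

On a CPU, the same job would normally be a single loop visiting each pixel in turn, which is exactly the kind of repetitive work a GPU’s thousands of cores are built to share out.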

A good-quality monitor can display 120 separate frames (or still pictures) in a single second, each one slightly different from its predecessor.

Data on future frames also needs to be stored in memory as a buffer, ready to be displayed when the current frame is finished.

The more memory (a dedicated form of random access memory known as video RAM, or VRAM) a graphics card has, the more smoothly it’ll output each frame.
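As a rough sketch of how that buffering works in principle, the following simplified example (hypothetical, using a greyscale frame to keep it short) follows the classic double-buffering approach: the monitor reads one finished frame while the next is drawn into a second buffer, and the two are then swapped:

```cuda
#include <cstdio>
#include <cstring>

// Illustrative double buffering: draw into a "back" buffer while the
// "front" buffer is being shown, then swap them for the next frame.
const int WIDTH = 1920, HEIGHT = 1080;

unsigned char frameA[WIDTH * HEIGHT];   // two full-frame buffers held in memory
unsigned char frameB[WIDTH * HEIGHT];

int main()
{
    unsigned char *front = frameA;      // currently shown on screen
    unsigned char *back  = frameB;      // currently being drawn into

    for (int frame = 0; frame < 3; ++frame) {
        memset(back, frame * 10, WIDTH * HEIGHT);   // "draw" the next frame

        unsigned char *tmp = front;     // swap: the new frame becomes visible,
        front = back;                   // the old buffer is reused for drawing
        back = tmp;

        printf("Frame %d displayed, first pixel value %d\n", frame, front[0]);
    }
    return 0;
}
```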

All this requires a huge amount of processing resources, which a CPU simply doesn’t have.

Although individual CPU cores run at higher clock speeds than GPU cores, consumer CPUs are usually manufactured with a maximum of around 18 cores, making them far less able to multitask on this scale.

The integrated graphics built into many CPUs are fine for displaying webpages and word processing, but they’d struggle to redraw 2,073,600 individual pixels 120 times per second in a modern 3D game.

(That’s how many pixels a screen set to the standard 1920 x 1080 resolution contains, and every one of them needs recalculating for each frame when displaying at 120fps.)
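If you’d like to check that arithmetic, this tiny snippet simply multiplies the pixel count of a 1080p screen by a 120fps refresh rate:

```cuda
#include <cstdio>

// A quick back-of-the-envelope calculation of what 1080p at 120fps demands.
int main()
{
    long long pixelsPerFrame  = 1920LL * 1080;              // 2,073,600 pixels
    long long framesPerSecond = 120;
    long long pixelsPerSecond = pixelsPerFrame * framesPerSecond;

    printf("Pixels per frame : %lld\n", pixelsPerFrame);    // 2,073,600
    printf("Pixel updates/sec: %lld\n", pixelsPerSecond);   // 248,832,000
    return 0;
}
```

That’s almost 250 million pixel values to work out every second, before any 3D effects are even considered.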

Although 120fps is a high display rate rather than a default one, it delivers seamless moving images which our eyes struggle to distinguish from real-world motion.

Picture perfect

GPUs can perform certain tasks a hundred times more quickly than CPUs.

This has seen them used for far more than simply displaying images. Computer graphics cards are often lashed together in huge arrays to perform complex mathematical calculations.

However, from a consumer perspective, GPUs are best suited to displaying three-dimensional graphics like the ones in computer games.

They’re also capable of advanced display techniques like ray tracing, which produces an image by tracking the paths of individual light rays as they move around a scene.

This can simulate how light and shade would interact with objects, providing a highly accurate replication of the illumination we see around us in daily life.
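For the curious, here’s a stripped-down sketch of the most basic calculation in ray tracing: testing whether a single ray of light strikes a sphere. The scene, values and helper names here are ours, purely for illustration; a real renderer repeats tests like this for millions of rays in every frame:

```cuda
#include <cstdio>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 sub(Vec3 a, Vec3 b)  { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// Does a ray starting at 'origin' and travelling along 'dir' hit the sphere?
// Solves |origin + t*dir - centre|^2 = radius^2 for the distance t.
bool hitSphere(Vec3 origin, Vec3 dir, Vec3 centre, float radius, float *t)
{
    Vec3 oc = sub(origin, centre);
    float a = dot(dir, dir);
    float b = 2.0f * dot(oc, dir);
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - 4 * a * c;              // discriminant of the quadratic
    if (disc < 0) return false;                  // the ray misses the sphere
    *t = (-b - sqrtf(disc)) / (2.0f * a);        // nearest intersection point
    return *t > 0;
}

int main()
{
    // A ray fired straight ahead towards a sphere five units away.
    Vec3 origin = {0, 0, 0}, dir = {0, 0, 1};
    Vec3 centre = {0, 0, 5};
    float t;
    if (hitSphere(origin, dir, centre, 1.0f, &t))
        printf("Ray hits the sphere %.2f units away\n", t);   // expect 4.00
    return 0;
}
```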

Detailed reflections and shadows add greater atmosphere and realism to the images found in games, achieving new standards of detail and accuracy.

By: Neil Cumins

Neil is our resident tech expert. He's written guides on loads of broadband head-scratchers and is determined to solve all your technology problems!