Bits and bytes and bandwidth speeds

Saturday, 9 May, 2020

Computers can do some amazing things.

They can perform advanced modelling, play streaming media, power online games, and generate website code as you drag content around a screen.

It’s easy to forget that each of these activities requires every single pixel colour, user input and mathematical calculation to be broken down into binary data – on or off.

Considering everything a computer does is based on the digital equivalent of a light switch, it’s amazing they’re as powerful as they are.

However, the binary input that lets computing devices understand instructions has led to some fairly confusing terminology.

This is what it all means.

Getting the bit between your teeth

The word bit is short for binary digit – a single value that can only ever be on or off. Billions of these are required to run contemporary computer software.

Bits are packaged together in bundles of eight, forming a byte. This allows for an exponential increase in possible values.

A bit can only have two values – 0 or 1 – whereas a byte offers far more potential. For instance, 01001110 is one of 256 possible patterns a byte can hold.
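
If you’d like to check that maths yourself, a couple of lines of Python (purely an illustrative sketch) will do it:

```python
# Eight bits, each with two possible states, give 2**8 combinations
print(2 ** 8)              # 256

# The example byte above, read as a binary number
print(int("01001110", 2))  # 78
```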

In the 1980s, home computers like the Sinclair ZX Spectrum were known as eight-bit machines because their central processing units could process eight bits of data in a single instruction.

Then came 16-bit machines like the Atari ST and Commodore Amiga, which could handle 16 bits at once – 256 × 256, or 65,536 possible values.
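
The same quick check shows the size of that jump:

```python
# Doubling the number of bits squares the number of possible values
print(2 ** 16)   # 65536 - the same as 256 * 256
```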

It’s easy to guess which machines were better at processing graphics, running complex programs and conducting quick calculations.

Then came the World Wide Web, invented by Sir Tim Berners-Lee in 1989 and opened to the public in 1991. And suddenly, bits and bytes were being used to measure speed, as well as volume.

Gaining the kilos

It may seem strange that data volumes are recorded in bytes – kilobyte, megabyte – yet internet speeds are calculated in bits – kilobits per second, megabits per second, etc.

This is because data is stored in bytes. As we’ve seen, a bit isn’t much use in isolation.

However, internet data travels down the line as a stream of individual bits. Files are broken into packets – each containing many bytes – and sent along whichever route is quickest at that moment.

When the packets arrive at their destination, they’re reassembled into the original file, which the host device can then act on.

Because the connection itself carries a continuous stream of bits rather than whole files, line speeds are most naturally measured in bits per second.
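
As a toy illustration of that bit-by-bit transmission, here’s a simplified Python sketch – real connections move whole packets of bytes, and your network hardware handles all of this invisibly:

```python
def byte_to_bits(value):
    """Split a byte (0-255) into its eight bits, most significant first."""
    return [(value >> i) & 1 for i in range(7, -1, -1)]

def bits_to_byte(bits):
    """Reassemble eight bits back into the original byte."""
    result = 0
    for bit in bits:
        result = (result << 1) | bit
    return result

bits = byte_to_bits(78)      # the example byte 01001110 from earlier
print(bits)                  # [0, 1, 0, 0, 1, 1, 1, 0]
print(bits_to_byte(bits))    # 78 - back where we started
```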

However, it’s easy to calculate how long a specific volume of data ought to take to transmit or receive.

A connection of 100 megabits per second would need one second to transfer a file of 12.5 megabytes, since eight bits make a byte (100 ÷ 8 = 12.5).

To calculate the other way, take the size in bytes of the file you want to send or receive, multiply it by eight, and divide by the number of bits per second your line supports.

A one-megabyte file contains a million bytes, or eight million bits. If your connection supports a download rate of eight megabits per second (eight million bits), it’ll take one second.
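
Here’s that calculation as a short Python function – a rough sketch that ignores real-world overheads such as packet headers and congestion:

```python
def transfer_time_seconds(size_megabytes, speed_megabits_per_second):
    """Estimate a transfer time: megabytes * 8 = megabits,
    then divide by the line speed in megabits per second."""
    return (size_megabytes * 8) / speed_megabits_per_second

print(transfer_time_seconds(12.5, 100))  # 1.0 - the 100Mbps example above
print(transfer_time_seconds(1, 8))       # 1.0 - the 8Mbps example above
```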

It’s worth noting downloads tend to be given priority on domestic internet connections – often by a ratio of ten to one – because we download far more data than we upload.

A line that supports downloads of 10Mbps (megabits per second) may only support 1Mbps uploads.
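
To see what that asymmetry means in practice, here’s the same sum for a hypothetical 50MB file on a 10Mbps-down, 1Mbps-up line:

```python
file_size_megabits = 50 * 8     # a 50MB file is 400 megabits
print(file_size_megabits / 10)  # 40.0 seconds to download at 10Mbps
print(file_size_megabits / 1)   # 400.0 seconds to upload at 1Mbps
```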

In written form, you can tell bits and bytes apart easily – bits are abbreviated with a lowercase b (Mb, as in Mbps), whereas bytes take an uppercase B (MB).


By: Neil Cumins

Neil is our resident tech expert. He's written guides on loads of broadband head-scratchers and is determined to solve all your technology problems!