What can – and can’t – you do with generative AI?

Generative AI promises to solve many problems, though it’s advisable to approach it with caution at this early stage.

Monday, 28 October, 2024

In recent decades, internet users have seen a procession of supposedly game-changing developments and new technologies emerge.

Some, like full fibre broadband and streaming media services, turned out to be groundbreaking.

Others, including virtual reality and the blockchain, quickly fizzled out – as promising yet as unrealised as 3D televisions or quantum computing.

The metaverse promised to change our lives until everyone realised it was faintly ludicrous, while NFTs were briefly the future of art despite being little more than a marketing scam.

What, then, should we make of similarly hyperbolic claims about generative AI?

Hardware manufacturers are breathlessly claiming gen AI will transform everything from internet searches to graphic design and content creation.

It’s already led to job losses in the creative industries, with businesses choosing free AI content generation ahead of crafted copywriting, and mocked-up images over actual photos.

The generation game

The concept of artificial intelligence first appeared in Greek mythology, and computerised AI research traces its origins back to the 1950s.

However, generative AI is rather different.

You see examples of it in modern search results, where a small diamond icon in Google results sits beside a condensed summary of information relating to your search term.

Platforms like ChatGPT and Jasper will respond to a request for information on any subject by producing paragraphs of text almost instantly.

AI can power chatbots more effectively than manually scripted, rules-based responses, as well as create media content including songs, poems and corporate copy.

It’s recently been used to create two adverts broadcast on ITV, as well as (unsuccessfully) to produce a fake image of the Prime Minister for the front page of the relaunched London Standard.

The tech platforms would like us all to rely on AI in future, though they’d prefer we didn’t consider any copyright issues.

Effectively, these tools look at everything on the surface web – from the professional to the amateurish, from the sublime to the ridiculous – and regurgitate it in slightly revised form.

They do so without paying any royalties to the original content creators, whose work they are reproducing in an inferior and condensed form.

Generative AI doesn’t actually generate so much as repackage, which has led to a myriad of issues everyone should be aware of before relying on it…

New and unproven

Gen AI is still a new technology – as undeveloped as those legless Nintendo Wii-style avatars that were supposed to represent us in the metaverse.

There are also global questions of IP theft and copyright infringement still to be answered, which could ultimately bring down the entire industry.

Moreover, the internet itself is hardly a repository of factually accurate reporting. Even Wikipedia can be wrong, while mainstream news platforms are as biased as their owners.

The biased, false, outdated (occasionally libellous) content found on websites, forums and social media is being blandly repackaged by gen AI platforms and presented to us as facts.

AI content can be totally fabricated; in one recent legal case, a junior lawyer used gen AI which invented court cases and then quoted them as precedent. This was not well received.

There are wider (albeit more technical) concerns about this sector’s sustainability, opacity, legal liability and cybersecurity risks. Deepfake generation, inconsistency… the list goes on.

So what can I do with it?

For now, we’d suggest not very much.

Law firms are advising their staff not to publish any generative AI content, and company owners or marketing managers should also approach AI-spawned text with great caution.

If this content subsequently turns out to be false, libellous or misleading, there’ll be no defence in claiming a software algorithm wrote it.

Even Google’s AI-generated search summaries are often unreliable, inaccurate and/or unhelpful.

We’d suggest using AI to summarise lengthy articles, email threads or publications, or to produce first drafts of documents before they’re (diligently) reviewed and edited.
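If you do want to experiment, a little automation goes a long way. Below is a minimal sketch (in Python) of how a summarisation request might look using OpenAI's official Python library. The model name, prompt wording and input file are illustrative assumptions rather than recommendations, and the output would still need that diligent human review.

# Minimal sketch: asking a gen AI model to summarise a long document.
# Assumes the official OpenAI SDK is installed (pip install openai) and an
# OPENAI_API_KEY environment variable is set. The model name, prompt and
# input file below are illustrative assumptions, not recommendations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarise(text: str) -> str:
    """Return a short, plain-English summary of the supplied text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute whichever you use
        messages=[
            {"role": "system",
             "content": "Summarise the user's text in three plain-English bullet points."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("long_article.txt") as f:  # hypothetical input file
        print(summarise(f.read()))

Whatever the tool produces, treat it as a starting point and check every claim before anything is published.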

From a technical perspective, gen AI could improve chatbot functionality, translation services and software coding, while there’s hope it might improve medical analysis and research.

It’s also fun to play around with random graphic creation and text production, though from a consumer or small business perspective, it’s perhaps safest not to rely on generative AI yet.

By: Neil Cumins

Neil is our resident tech expert. He's written guides on loads of broadband head-scratchers and is determined to solve all your technology problems!