BBC threatens AI firm with legal action over unauthorised content use

JUNE 21 – The BBC is threatening to take legal action against an artificial intelligence (AI) firm whose chatbot the corporation says is reproducing BBC content “verbatim” without its permission.

The BBC has written to Perplexity, which is based in the US, demanding that it immediately stop using BBC content, delete any it holds and propose financial compensation for the material it has already used.

It is the first time that the BBC – one of the world’s largest news organisations – has taken such action against an AI company.

In a statement, Perplexity said: “The BBC’s claims are just one more part of the overwhelming evidence that the BBC will do anything to preserve Google’s illegal monopoly.”

It did not explain what it believed the relevance of Google was to the BBC’s position, or offer any further comment.

The BBC’s legal threat has been made in a letter to Perplexity’s boss Aravind Srinivas.

“This constitutes copyright infringement in the UK and breach of the BBC’s terms of use,” the letter says.

The BBC also cited its research published earlier this year that found four popular AI chatbots – including Perplexity AI – were inaccurately summarising news stories, including some BBC content.

Pointing to findings of significant issues with the representation of BBC content in some of the Perplexity AI responses analysed, it said such output fell short of the BBC's Editorial Guidelines on the provision of impartial and accurate news.

“It is therefore highly damaging to the BBC, injuring the BBC’s reputation with audiences – including UK licence fee payers who fund the BBC – and undermining their trust in the BBC,” it added.

Web scraping scrutiny

Chatbots and image generators that can produce content in response to simple text or voice prompts in seconds have swelled in popularity since OpenAI launched ChatGPT in late 2022.

But their rapid growth and improving capabilities have prompted questions about their use of existing material without permission.

Much of the material used to develop generative AI models has been pulled from a massive range of web sources using bots and crawlers, which automatically extract site data.

The rise in this activity, known as web scraping, recently prompted British media publishers to join calls by creatives for the UK government to uphold protections around copyrighted content.

In response to the BBC’s letter, the Professional Publishers Association (PPA) – which represents over 300 media brands – said it was “deeply concerned that AI platforms are currently failing to uphold UK copyright law.”

It said bots were being used to “illegally scrape publishers’ content to train their models without permission or payment.”

It added: “This practice directly threatens the UK’s £4.4 billion publishing industry and the 55,000 people it employs.”

Many organisations, including the BBC, use a file called “robots.txt” in their website code to try to block bots and automated tools from extracting data en masse for AI.

Where present, it instructs bots and web crawlers not to access certain pages and material.

But compliance with the directive remains voluntary and, according to some reports, bots do not always respect it.
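The voluntary nature of the file can be illustrated with a short sketch. The example below assumes Python's standard urllib.robotparser module and a hypothetical crawler name, "ExampleBot"; a well-behaved bot checks the site's rules before fetching a page, but nothing technically prevents a crawler from skipping the check.

```python
# Minimal sketch of how a compliant crawler might consult robots.txt.
# "ExampleBot" is a hypothetical user agent used purely for illustration.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser("https://www.bbc.co.uk/robots.txt")
parser.read()  # download and parse the site's robots.txt rules

# can_fetch() returns False if the rules disallow this user agent for the URL
allowed = parser.can_fetch("ExampleBot", "https://www.bbc.co.uk/news")
print("ExampleBot may crawl /news:", allowed)
```

Whether the check returns True or False depends entirely on the site's current rules, and a non-compliant crawler can simply ignore them.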

The BBC said in its letter that while it disallowed two of Perplexity’s crawlers, the company “is clearly not respecting robots.txt”.

In an interview with Fast Company last June, Mr Srinivas denied accusations that Perplexity's crawlers ignored robots.txt instructions.

Perplexity also says that because it does not build foundation models, it does not use website content for AI model pre-training.

‘Answer engine’

The company’s AI chatbot, which it describes as an “answer engine”, has become a popular destination for people looking for answers to common or complex questions.

It says on its website that it does this by “searching the web, identifying trusted sources and synthesising information into clear, up-to-date responses”.

It also advises users to double-check responses for accuracy – a common caveat accompanying AI chatbots, which are known to state false information in a matter-of-fact, convincing way.

In January, Apple suspended an AI feature that generated false headlines for BBC News app notifications when summarising groups of them for iPhone users, following BBC complaints.

By BBC
