SAN FRANCISCO — British officials are advising firms against incorporating artificial intelligence chatbots into their operations, saying that a growing body of research has revealed that they can be misled into carrying out damaging tasks.
In a pair of blog posts published recently, Britain’s National Cyber Security Centre (NCSC) said that experts had not yet got to grips with the potential security problems tied to algorithms that can generate human-sounding interactions — dubbed large language models, or LLMs.
The AI-powered tools are seeing early use as chatbots that some envision displacing not just internet searches but also customer service work and sales calls.
The NCSC said that could carry risks, particularly if such models were plugged into other elements organisation’s business processes. Academics and researchers have repeatedly found ways to subvert chatbots by feeding them rogue commands or fooling them into circumventing their own built-in guardrails.
For example, an AI-powered chatbot deployed by a bank might be tricked into making an unauthorised transaction if a hacker structured their query just right.
“Organisations building services that use LLMs need to be careful, in the same way they would be if they were using a product or code library that was in beta,” the NCSC said in one of its blog posts, referring to experimental software releases.
“They might not let that product be involved in making transactions on the customer’s behalf, and hopefully wouldn’t fully trust it. Similar caution should apply to LLMs.”
Authorities across the world are grappling with the rise of LLMs, such as OpenAI’s ChatGPT, which businesses are incorporating into a wide range of services, including sales and customer care.
The security implications of AI are also still coming into focus, with authorities in the US and Canada saying they have seen hackers embrace the technology.
A recent Reuters/Ipsos poll found many corporate employees were using tools like ChatGPT to help with basic tasks, such as drafting emails, summarising documents and doing preliminary research.
Some 10% of those polled said their bosses explicitly banned external AI tools, while a quarter did not know if their company permitted the use of the technology.