NEWS | 7 February 2024

Metomic develops ChatGPT data security tool


UK – Data security company Metomic has launched a tool that business leaders can use to monitor what sensitive data employees are uploading to generative AI platform ChatGPT.


The browser plugin will allow IT and security managers to identify when an employee logs in to OpenAI’s ChatGPT tool.

Users of the plugin can receive alerts if employees upload data including personally identifiable information (PII), security credentials or intellectual property.

The tool ships with 150 pre-built data classifiers designed to recognise common data risks, and businesses can also use it to develop customised classifiers.
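Metomic has not published implementation details, but the general idea of a data classifier of this kind can be sketched as a set of pattern rules applied to text before it leaves the browser. The TypeScript below is a hypothetical, minimal illustration only; the function names, labels and regular expressions are invented for explanation and are not Metomic's code or API.

```typescript
// Hypothetical sketch of a regex-based data classifier (illustration only,
// not Metomic's implementation). Each classifier is a label plus a pattern.
type Classification = { label: string; match: string };

const classifiers: { label: string; pattern: RegExp }[] = [
  // Deliberately rough PII / credential patterns for demonstration purposes.
  { label: "email-address", pattern: /[\w.+-]+@[\w-]+\.[\w.]+/g },
  { label: "credit-card-number", pattern: /\b(?:\d[ -]?){13,16}\b/g },
  { label: "aws-access-key", pattern: /\bAKIA[0-9A-Z]{16}\b/g },
];

// Scan the text an employee is about to submit and return any matches.
function classify(text: string): Classification[] {
  const hits: Classification[] = [];
  for (const { label, pattern } of classifiers) {
    for (const match of text.matchAll(pattern)) {
      hits.push({ label, match: match[0] });
    }
  }
  return hits;
}

// Example: a security team could alert (rather than block) on any hit.
const findings = classify(
  "Reach me at jane.doe@example.com, key AKIA1234567890ABCDEF"
);
if (findings.length > 0) {
  console.warn("Sensitive data detected before upload:", findings);
}
```

In practice such rules would sit alongside custom classifiers defined by the business, with matches surfaced to IT and security managers as alerts rather than hard blocks, in line with the monitoring approach the article describes.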

Employees inadvertently sharing sensitive company data – which can then be used by an AI model for training – has resulted in several businesses prohibiting employee use of generative AI platforms. 

London-headquartered Metomic, founded in 2018, offers software used by businesses to protect sensitive data in software-as-a-service applications including Slack and Microsoft Teams.

Rich Vibert, chief executive, Metomic, said: “Very few technology solutions have had the impact of OpenAI’s ChatGPT platform. But because of the large language models that underpin the generative AI technology, many business leaders are apprehensive to leverage the technology, fearing their most sensitive business data could be exposed.”

Vibert said businesses using the plugin tool would “gain all the advantages that come with ChatGPT while avoiding serious data vulnerabilities”.
