Five best practices to protect your data privacy when implementing Gen AI
Gen AI is becoming increasingly popular, with many companies integrating it into their operations to enhance efficiency and innovation.
Furthermore, a McKinsey & Company survey shows more companies are using AI across multiple business functions — half of respondents reported adoption in two or more areas in 2024, up from less than a third in 2023.
Similarly, according to Statista, almost 11% of employees working at global firms have tried using ChatGPT in the workplace at least once.
However, this widespread adoption brings new security challenges, particularly regarding data privacy. For example, of those who used ChatGPT at work, almost 5% have put confidential corporate data into the AI-powered tool.
In fact, nearly one-third of employees have admitted to placing sensitive data into Gen AI tools, making data leaks a top concern.
According to a report by AI security solutions provider HiddenLayer, more than three-quarters of companies either using or exploring AI have experienced AI-related security breaches.
How are businesses using Gen AI?
A study by Harmonic Security, titled GenAI Unleashed, found that employees uploaded data to an average of 8.25 Gen AI apps each month.
The study found that content creation, summarising, and editing were overwhelmingly popular among workplace users, with around 47% of prompts asking apps for help in those areas.
They were followed by software engineering (15%); data interpretation, processing, and analysis (12%); business and finance (7%); and problem-solving/troubleshooting (6%).
The most popular Gen AI tool by far was ChatGPT, used by 84% of users — six times as popular as Google Gemini (14%), the next most popular tool.
Alastair Paterson, co-founder and CEO of Harmonic Security, explains, “With a choice of over 5,000 GenAI apps and a high number of average apps used by employees, there are too many out there for IT departments to properly keep track of using existing tools. We particularly urge organisations to pay attention to apps that are training on customer data.”
How can companies ensure data privacy when using Gen AI tools?
TechInformed consulted industry experts to compile a list of best practices for safeguarding data privacy in the era of Gen AI; here are our top tips.
1. Avoid inputting personal or sensitive information into Gen AI LLMs
Generative AI tools and large language models (LLMs) can store and repurpose data provided to them. To prevent unauthorised access, avoid inputting personal or proprietary information into these tools.
Sebastian Gierlinger, VP of Engineering at Storyblok, says, “The biggest threat we are aware of is the potential for human error when using generative AI tools to result in data breaches. Employees sharing sensitive business information while using services such as ChatGPT risk that data will be retrieved later, which could lead to leaks of confidential data and subsequent hacks.”
He says the solution could be as simple as educating employees about how to use tools like ChatGPT safely.
That said, Leanne Allen, head of AI at KPMG UK, adds that “there are security measures that can remove sensitive or personal data automatically from prompts before they are used by a generative AI model. These measures can help mitigate the risk of data leaks and breaches of legally protected information – especially since human error will likely still occur.”
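To make that concrete, here is a minimal Python sketch of the kind of automatic redaction Allen describes. The patterns are illustrative assumptions only; production deployments typically rely on dedicated data loss prevention (DLP) or named-entity recognition tooling with far broader coverage.

```python
import re

# Illustrative patterns only; real deployments use DLP/NER tooling
# covering names, addresses, API keys, and much more.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "UK_NINO": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),  # UK National Insurance number
}

def redact(prompt: str) -> str:
    """Mask sensitive substrings before the prompt leaves the company network."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarise this complaint from jane.doe@example.com, card 4111 1111 1111 1111."
    print(redact(raw))
    # Summarise this complaint from [EMAIL REDACTED], card [CREDIT_CARD REDACTED].
```

Running a filter like this at a proxy or gateway, rather than on each laptop, means protection does not depend on every employee remembering the rules.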
2. Create and enforce an AI & Privacy policy
A comprehensive company policy on AI usage and data privacy can help mitigate many risks associated with Gen AI tools.
Angus Allan, senior product manager at CreateFuture, says, “Establishing a clear AI policy from the outset can streamline this entire process by enabling businesses to tailor controls to their risk tolerance and specific use case.”
Allan stresses the importance of tailoring any policy to the specific company and addressing how AI will be uniquely leveraged for that industry and use case.
“An AI policy not only pre-empts data privacy risks but also sets clear expectations, reduces ambiguity, and empowers teams to focus on solving the right problems,” he says.
“In an era of GDPR and increased regulatory scrutiny of AI, it’s imperative for every business to get these basics right to minimise data risks and protect customers.”
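Part of such a policy can also be expressed in machine-readable form so that it can be enforced automatically, for example at a web proxy. The sketch below is hypothetical: the tool names, risk tiers, and rules are invented for illustration and would come from a company’s own risk review.

```python
# Hypothetical policy snippet: tool names and tiers are invented for
# illustration; a real policy reflects the company's own risk assessment.
AI_TOOL_POLICY = {
    "chatgpt-enterprise": {"approved": True,  "max_data_class": "internal"},
    "chatgpt-free":       {"approved": False, "max_data_class": None},
    "internal-llm":       {"approved": True,  "max_data_class": "confidential"},
}

DATA_CLASSES = ["public", "internal", "confidential"]  # lowest to highest sensitivity

def is_allowed(tool: str, data_class: str) -> bool:
    """Return True if the policy permits sending data of this class to the tool."""
    rule = AI_TOOL_POLICY.get(tool)
    if rule is None or not rule["approved"]:
        return False  # unknown or banned tools are denied by default
    return DATA_CLASSES.index(data_class) <= DATA_CLASSES.index(rule["max_data_class"])

print(is_allowed("chatgpt-free", "internal"))      # False
print(is_allowed("internal-llm", "confidential"))  # True
```

A deny-by-default check like this turns the written policy into something auditable rather than aspirational.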
3. Manage data privacy settings
Most Gen AI tools have features that allow users to disable data storage. Employees should navigate to the tool’s settings and disable such features to prevent company data from being used for AI model training.
Patrick Spencer, VP of corporate marketing at Kiteworks, explains, “A typical disablement feature looks something like this: navigate to Settings and, under Data Control, disable the ‘Improve Model for Everyone’ option. Regularly review permissions to prevent unnecessary data access, ensure privacy, and thwart unauthorised access.”
Deleting chat histories in AI tools can also reduce the risk of sensitive information being stored, he says.
“OpenAI typically deletes chats within 30 days; however, their usage policy specifies that some chats can be retained for security or legal reasons. To delete chats, access the AI tool’s settings and find the option to manage or delete chat history.”
He adds that this should be done periodically to maintain data privacy and minimise vulnerabilities.
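For teams reaching models through an API rather than a chat interface, the same intent can be stated explicitly in code. A minimal sketch, assuming the official openai Python SDK and its store flag (available in recent SDK versions; other vendors expose similar retention controls under different names):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarise our Q3 release notes."}],
    store=False,  # explicitly opt out of server-side storage of this completion
)
print(response.choices[0].message.content)
```

Retention controls only govern what the vendor stores; they complement, rather than replace, keeping sensitive data out of prompts in the first place.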
4. Regularly change passwords, or ditch them altogether
When using passwords, they should be long, complex, and unique for each account, including those linked to AI systems. However, Ev Kontsevoy, CEO of cybersecurity startup Teleport, advocates moving away from passwords altogether.
He explains, “Every enterprise housing modern infrastructure should cryptographically secure identities. This means basing access not on passwords but on physical-world attributes like biometric authentication and enforcing access with short-lived privileges that are only granted for individual tasks that need performing.”
Cryptographic identities consist of three components: the device’s machine identity, the employee’s biometric marker, and a PIN. By combining these, Kontsevoy says, businesses can significantly reduce the attack surface that threat actors exploit through social engineering.
“If you need a poster child for this security model, it already exists, and it’s called the iPhone. It uses facial recognition for biometric authentication, a PIN code, and a Trusted Platform Module chip inside the phone that governs its ‘machine identity.’ This is why you never hear about iPhones getting hacked.”
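The “short-lived privileges” idea can be illustrated with a toy token issuer. The following sketch uses only Python’s standard library and is deliberately simplified: real systems, Teleport among them, issue short-lived certificates from a certificate authority and bind them to hardware-backed device identity rather than a shared secret.

```python
import base64, hashlib, hmac, json, time

SECRET = b"demo-only-secret"  # stand-in for a CA / hardware-backed key

def issue_token(user: str, task: str, ttl_seconds: int = 300) -> str:
    """Grant a privilege for one task, expiring automatically after ttl_seconds."""
    claims = {"user": user, "task": task, "exp": time.time() + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return f"{body.decode()}.{sig}"

def verify_token(token: str, task: str) -> bool:
    """Accept only a validly signed, unexpired token issued for this exact task."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(body))
    return claims["exp"] > time.time() and claims["task"] == task

token = issue_token("alice", "deploy-staging")
print(verify_token(token, "deploy-staging"))  # True
print(verify_token(token, "read-prod-db"))    # False: wrong task
```

The point of the exercise is the expiry and the task binding: a stolen token is useless within minutes, and useless immediately for any other task.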
5. Disconnect your systems from the internet
Tony Hasek, CEO and co-founder of cybersecurity firm Goldilock, offers a more drastic solution: physical network segmentation, the ability to connect and disconnect networks at the press of a button.
“Through a hardware-based approach, physical network segmentation enables users to segment all digital assets, from LLMs to entire networks, remotely, instantly and without using the internet,” he says.
He adds that businesses can reduce the level of sensitive data exposure by rethinking which parts of their networks they keep online and moving away from an “always-on” model.
“Companies who are building their own internal large language models (LLMs) in-house are essentially creating a repository for their company’s most valuable data and intellectual property, including customer and employee data, trade secrets, and product strategies. This makes LLMs and other Gen AI models a prime target for cybercriminals.”
He concludes, “Keeping Gen AI models offline until they are needed to generate a response is a critical step in ensuring the valuable data they contain is kept safe, and physical network segmentation can ensure networks can switch from online to offline seamlessly.”
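Goldilock’s approach is hardware-based and triggered out-of-band, but the underlying idea, keeping a segment offline except while servicing a request, can be sketched in software. A crude Linux-only illustration (it requires root, and the interface name and query_internal_llm helper are hypothetical):

```python
import subprocess

LLM_SEGMENT_IFACE = "eth1"  # hypothetical interface facing the internal LLM segment

def set_segment(online: bool) -> None:
    """Bring the LLM segment's network link up or down via the Linux `ip` tool."""
    state = "up" if online else "down"
    subprocess.run(["ip", "link", "set", LLM_SEGMENT_IFACE, state], check=True)

def query_internal_llm(prompt: str) -> str:
    """Stub standing in for a call to the in-house model."""
    return f"(response to: {prompt})"

def answer_query(prompt: str) -> str:
    """Connect the segment only for the duration of a single request."""
    set_segment(True)
    try:
        return query_internal_llm(prompt)
    finally:
        set_segment(False)  # the default state is offline
```

A hardware switch avoids the obvious weakness of this software analogue: code that can bring the link up can itself be compromised.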
Now that you’ve handled security and data privacy, you can find out how to lead the adoption of Gen AI in your enterprise (when half of all uptake is happening outside the IT department) — read more here.