AI Security

Can You Trust Your AI Coworker?


Generative AI tools like OpenAI’s ChatGPT and Microsoft’s Copilot are quickly becoming integral to our daily business lives. However, they bring significant privacy and security concerns that need to be addressed.

Privacy Concerns

In May, Microsoft’s new Recall tool raised alarms among privacy advocates. This tool, which takes screenshots of your laptop every few seconds, was called a “privacy nightmare.” The UK’s Information Commissioner’s Office is investigating the tool’s safety before its upcoming release in Copilot+ PCs.

OpenAI’s ChatGPT also faces scrutiny over the screenshot capabilities of its forthcoming macOS app, which privacy experts worry could capture sensitive data. Concerns like these led the US House of Representatives to ban Microsoft’s Copilot among staff, citing the risk of data leaking to non-House-approved cloud services.

Market analyst Gartner has warned that using Copilot for Microsoft 365 could expose sensitive data both internally and externally. Google recently had to adjust its new search feature, AI Overviews, after it provided bizarre and misleading answers that went viral.

Risks of Data Exposure

Generative AI systems are massive data collectors. Camden Woollven, group head of AI at GRC International Group, notes that these systems are “essentially big sponges,” absorbing vast amounts of information from the internet to train their language models. This makes them a potential risk for sensitive data exposure.

Steve Elcock, CEO of Elementsuite, points out that AI companies are “hungry for data to train their models,” which could lead to sensitive information being unintentionally shared and possibly extracted later through clever prompting. Additionally, these AI systems could be targeted by hackers, posing further risks of data breaches.

Proprietary AI and Employee Monitoring

Phil Robinson from Prism Infosec highlights that proprietary AI tools like Microsoft Copilot could still pose risks. Without proper access controls, employees might access sensitive data, such as pay scales or M&A activities, which could be leaked or sold.

There’s also the concern that AI tools could be used to monitor staff, infringing on their privacy. Microsoft assures that Recall’s snapshots stay local to your PC and are controlled by the user. However, Elcock suggests it’s only a matter of time before this technology could be used for employee surveillance.

Mitigating Risks

To improve privacy and security, businesses and employees should avoid sharing confidential information with public AI tools like ChatGPT or Google’s Gemini. When using AI, keep prompts generic to avoid revealing too much detail — for example, ask for a proposal template rather than pasting in specific budget figures. Always validate AI-generated information and review any code written by AI.
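The "keep prompts generic" advice can even be partially automated. The sketch below is a minimal, hypothetical example of scrubbing obviously sensitive values from a prompt before it leaves the company; the patterns and placeholders are illustrative assumptions, not a complete data-loss-prevention solution.

```python
import re

# Hypothetical redaction patterns -- illustrative, not exhaustive.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "money": re.compile(r"[$\u00a3\u20ac]\s?\d[\d,]*(?:\.\d+)?"),
}

def redact(prompt: str) -> str:
    """Replace matched sensitive values with generic placeholders
    so the prompt stays useful but less revealing."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"<{label}>", prompt)
    return prompt

print(redact("Draft a proposal for jane@acme.com with a $250,000 budget"))
# -> Draft a proposal for <email> with a <money> budget
```

A real deployment would need far richer patterns (names, project codenames, customer IDs), but even a simple pass like this reinforces the habit of asking for templates rather than sharing specifics.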

Microsoft emphasizes the need for correct configuration and applying the “least privilege” principle, ensuring users only access the information they need. ChatGPT users should note that their data is used for training unless they opt out in the settings or use the enterprise version.
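The "least privilege" principle Microsoft describes can be sketched in a few lines: whatever an AI assistant retrieves on a user's behalf is filtered through the same access controls the user already has. The document names, roles, and access list below are invented for illustration.

```python
# Hypothetical access-control list: which roles may read which documents.
DOCUMENT_ACL = {
    "q3-roadmap.docx": {"staff", "manager"},
    "pay-scales.xlsx": {"hr"},          # sensitive: HR only
    "ma-targets.pptx": {"executive"},   # sensitive: executives only
}

def retrieve_for_assistant(doc: str, user_roles: set) -> str:
    """Return a document to the assistant only if the requesting
    user's roles overlap the document's allowed roles."""
    allowed = DOCUMENT_ACL.get(doc, set())
    if user_roles & allowed:
        return f"[contents of {doc}]"
    raise PermissionError(f"{doc}: access denied for roles {sorted(user_roles)}")

print(retrieve_for_assistant("q3-roadmap.docx", {"staff"}))
# -> [contents of q3-roadmap.docx]
```

The point of the sketch is that the check happens at retrieval time, before anything reaches the model — a misconfigured tool that indexes everything first and filters later is exactly the exposure Gartner warns about.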

Security Measures

Companies integrating AI into their products claim to prioritize security and privacy. Microsoft provides control options for its Recall feature. Google maintains its foundational privacy protections, and OpenAI offers tools to manage data usage and provides enterprise versions with extra controls. OpenAI emphasizes that its models do not learn from user data by default.

Looking Ahead

As AI systems become more sophisticated, the risks will continue to grow. Woollven points out that multimodal models such as GPT-4o can process not just text, but also images, audio, and video. This expansion means companies must safeguard a broader range of data.

Ultimately, AI should be treated like any other third-party service. Woollven advises against sharing anything with an AI tool that you wouldn’t want publicly broadcast. With careful management and awareness, businesses can navigate the privacy and security challenges posed by their new AI coworkers.

