Report: 40% of employees share sensitive data with AI tools
The use of AI in the workplace is growing rapidly, but many organizations are struggling to secure it. According to Cyberhaven, nearly 40% of employee interactions with AI tools involve sensitive corporate data. Employees often use external tools like Claude and DeepSeek outside officially sanctioned channels, a practice known as "shadow AI."
A divide is emerging between companies: some adopt AI aggressively and deploy hundreds of tools, while others lag behind due to security concerns and legacy systems. Meanwhile, 82% of the most popular GenAI applications are considered risky, and about one-third of employees access them through personal accounts.
Chinese AI models, especially DeepSeek, are quickly gaining popularity and now account for around 50% of endpoint-based AI usage. At the same time, many employees do not fully understand the risks of entering sensitive data into AI systems, where that information may leave the organization's control.
Experts expect AI use to continue growing, particularly through specialized tools and AI agents. As a result, companies need stronger data governance, security controls, and monitoring.