GenAI Risks: How AI Tools Are Breaking Traditional Cybersecurity
Brickinfo News Agency – The rapid adoption of generative AI (GenAI) is exposing critical gaps in traditional cybersecurity awareness efforts, as employees increasingly turn to unsanctioned tools to complete work tasks. This trend is widening the divide between modern workflow habits and existing security controls, and employees are unknowingly putting sensitive corporate data and intellectual property at high risk of exposure.
Data from Gartner indicates a significant shift in workplace behavior, revealing that 57% of employees now use personal GenAI accounts for work-related purposes. Furthermore, 33% of staff admit to inputting sensitive company information into public or unapproved GenAI platforms. This “shadow AI” usage, where 36% of workers download unapproved tools onto work devices, significantly elevates the risk of data leaks and regulatory non-compliance.
Beyond internal risks, the rise of AI technology has equipped threat actors with tools to launch more sophisticated campaigns. According to the same research, 35% of organizations have already faced deepfake incidents, while 84% of cybersecurity leaders report that phishing attacks have become significantly more advanced. The volume of AI-assisted malicious emails has doubled in just two years, making human detection increasingly difficult without specialized training.
Richard Addiscott, VP Analyst at Gartner, highlighted the urgency of the situation, stating: “The integration of GenAI tools into daily workflows outpaces existing security controls, while threat actors are exploiting the same technology to sharpen their campaigns.” He cautioned that without executive support, risk management remains a challenge, noting: “Failing to secure senior leadership buy-in for GenAI governance and behaviour change initiatives can undermine efforts to operationalise effective risk management.”
To mitigate these risks, organizations are encouraged to move beyond static policies and foster a more adaptive security culture. This includes setting clear rules for responsible AI use, prioritizing data minimization, and updating employee training to cover simulations of deepfakes and AI-driven social engineering. Experts emphasize that human review of all AI-generated outputs remains essential to ensure accuracy and maintain security standards in an AI-driven environment.
