Securing Generative AI across the entire technology stack


Research shows that by 2026, more than 80% of enterprises will leverage generative AI models, APIs, or applications, up from less than 5% today.

This rapid adoption brings new considerations regarding cybersecurity, ethics, privacy, and risk management. Only 38% of companies using generative AI today mitigate cybersecurity risks, and only 32% work to address model inaccuracy.

My conversations with security professionals and entrepreneurs have focused on three key factors:

  1. Adopting enterprise generative AI adds complexity to existing security challenges, such as over-privileged access. For example, while traditional data loss prevention tools effectively monitor and control structured data flows, they often fall short with subtler risks in AI applications, such as unstructured data and ethically problematic or biased content within prompts and outputs.
  2. Market demand for various GenAI security products is closely linked to the trade-off between ROI potential and the inherent security vulnerabilities of the underlying use cases for which the applications are employed. This balance between opportunity and risk continues to evolve based on the ongoing evolution of AI infrastructure standards and the regulatory landscape.
  3. Like traditional software, generative AI must be secured at all architecture levels, especially the core interface, application, and data layers. Below is a snapshot of the different security product categories within the technology stack, highlighting areas where security leaders experience significant ROI and risk potential.
Table: security product categories across the GenAI technology stack

Image Credit: Forgepoint Capital


Interface Layer: Balancing Usability with Security

Businesses see immense potential in customer-facing chatbots, especially those leveraging customized models trained on industry- and company-specific data. The user interface is susceptible to prompt injection, a type of injection attack aimed at manipulating the model's response or behavior.
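To make the attack concrete, the sketch below shows a naive, illustrative heuristic that flags common injection phrasings in user input. This is an assumption-laden toy, not a production defense; the pattern list and function name are invented for illustration, and real guardrails typically use model-based classifiers rather than regexes.

```python
import re

# Hypothetical illustration only: a heuristic filter for common
# prompt-injection phrasings. Real defenses use trained classifiers
# and output validation, not a fixed pattern list.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"disregard (the|your) (system|previous) prompt",
    r"reveal (the|your) (system prompt|instructions)",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection phrasing."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)
```

A filter like this is trivially bypassed by paraphrasing, which is precisely why the market is moving toward AI-driven detection of malicious inputs.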

Furthermore, chief information security officers (CISOs) and security leaders are under increasing pressure to enable GenAI applications within their organizations. While the consumerization of the enterprise has been an ongoing trend, the rapid and widespread adoption of technologies like ChatGPT has given rise to an unprecedented, employee-led drive for their use in the workplace.

Widespread adoption of GenAI chatbots will put a premium on the ability to accurately and quickly intercept, review, and validate inputs and associated outputs at scale without degrading the user experience. Existing data protection tooling often relies on predetermined rules, resulting in false positives. Tools like Protect AI's Rebuff and Harmonic Security leverage AI models to dynamically determine whether data passing through a GenAI application is sensitive.
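The intercept-review-validate pattern can be sketched as a guard that wraps the model call, scanning both the prompt on the way in and the response on the way out. In this sketch, sensitivity detection is approximated with simple regexes for emails and card numbers; the function names and patterns are assumptions for illustration, whereas the tools named above replace this rule-based step with AI models.

```python
import re
from typing import Callable

# Hypothetical placeholder rules; a real deployment would call a
# model-based sensitivity classifier here instead of regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a sensitive pattern with a labeled placeholder."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

def guarded_chat(prompt: str, model: Callable[[str], str]) -> str:
    """Intercept both the input and the output of a model call."""
    return redact(model(redact(prompt)))
```

Because the guard sits inline on every request, its latency budget is tight, which is why the trade-off against user experience dominates product design in this category.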


