
Aligning with the Executive Order: The Triumvirate of People, Processes, and Technology in AI Governance

The recent executive order (EO) concerning artificial intelligence (AI) has cast a spotlight on the importance of ensuring the ethical and secure deployment of AI across sectors. As with most challenges in the IT space, navigating the future regulatory landscape of AI requires a harmonious balance of People, Processes, and Technology. When your security program and Security Operations Center draw on all three, you can effectively reduce the risks posed by AI and prepare for future regulatory requirements.

1. People: While the executive order is thankfully light on mandates, it serves as a stark reminder of the vital role people play. Staff training now extends beyond typical security protocols, emphasizing a deeper understanding of AI, especially the intricacies of its ethical usage. Employees must be well informed about the dangers of over-reliance on automation, data leakage, and intellectual property considerations, especially when leveraging external datasets or algorithms.

  • Security Awareness Training (SAT): A robust SAT program has always been a cornerstone of cybersecurity. With the dynamic challenges presented by AI, its value is further magnified. Incorporating AI topics into your monthly training cycle, sending regular reminders, and integrating AI-driven scenarios into phishing simulations can greatly amplify the effectiveness of SAT. It is often espoused that SAT delivers the best return on investment of any security control; given the unique challenges associated with AI, this assertion is even more compelling.

2. Processes: The EO advocates for the development of bodies and standards that may eventually regulate AI usage in certain organizations. While it does not specifically call for regulation of companies that use AI outside the healthcare and critical infrastructure verticals, we know from experience that regulation tends to suffer scope creep. At any rate, organizations can and should adopt process, standard, and policy controls to protect against potential new threat vectors.

  • Policies on AI Usage: Establishing clear policies on how and when AI can be used is paramount, covering areas such as data privacy and ethical considerations.
  • Standards for AI models and third-party applications: Organizations should set benchmarks for permissible AI models and applications, ensuring alignment with both ethical and security standards.
  • Processes for AI Deployment: A structured approach to deploying AI models promotes their ethical and effective use.
  • Integration into SDLC: Integrating AI considerations into the software development life cycle emphasizes the vital role of Software Bills of Materials (SBOMs) and the necessity of partnering with vendors that provide vetted APIs (a sketch of how an SBOM check might look follows below).
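As an illustration of the SDLC point above, the following Python sketch shows one way a build pipeline might flag AI-related components in a CycloneDX-style SBOM that are not on an organization's approved list. The file name, keyword list, and allowlist are assumptions for illustration only, not requirements drawn from the EO or any specific standard.

```python
# Minimal sketch: scan a CycloneDX-style SBOM (JSON) for AI/ML components
# that are not on an approved list. The file name, allowlist, and keyword
# list below are illustrative assumptions, not part of the EO or any standard.
import json

APPROVED_AI_COMPONENTS = {"approved-vendor-sdk"}              # hypothetical allowlist
AI_KEYWORDS = ("openai", "llm", "transformers", "langchain")  # illustrative keywords


def flag_unapproved_ai_components(sbom_path: str) -> list[str]:
    """Return component names that look AI-related but are not approved."""
    with open(sbom_path, encoding="utf-8") as fh:
        sbom = json.load(fh)

    flagged = []
    for component in sbom.get("components", []):
        name = component.get("name", "").lower()
        if any(keyword in name for keyword in AI_KEYWORDS) and name not in APPROVED_AI_COMPONENTS:
            flagged.append(name)
    return flagged


if __name__ == "__main__":
    for name in flag_unapproved_ai_components("sbom.json"):
        print(f"Review required: unapproved AI component '{name}'")
```

A check like this can run alongside existing dependency and license scans, so AI components are reviewed with the same rigor as any other third-party code.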

3. Technology: Even though the EO specifically mentions only penetration testing as a control for AI, other existing technologies can play a role in helping regulate AI usage within your organization.

  • Web Gateways: These tools offer the ability to regulate data flow and can be invaluable for organizations that have policies against certain AI usage. By configuring these gateways, firms can limit and control unauthorized AI access or usage.
  • Insider Threat and Behavioral Analytics Tools: Going beyond the broad controls of web gateways, these tools provide a more granular look into how your organization is utilizing AI. With the right insider threat technology, security analysts can build use cases to determine whether employees are misusing AI internally (a simple example follows this list). At Apollo Information Systems, our Security Operations Center has already tuned our UEBA tools to look for usage of AI along with other risky behaviors.
  • Private Instances of AI Models: Recognizing data privacy concerns, certain AI developers will eventually offer isolated instances of their models. This will become a basic necessity for any organization that encourages its employees to be more efficient with AI models, as it will help ensure that inputs and outputs remain the property of the company rather than the developer of the model.
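To make the gateway and insider-threat points above concrete, here is a minimal Python sketch that summarizes requests to known generative-AI services from web-gateway logs, the kind of signal a Security Operations Center could fold into a UEBA use case. The log format (one tab-separated user and domain per line) and the domain watchlist are illustrative assumptions, not a description of any particular product.

```python
# Minimal sketch: summarize which users are reaching generative-AI services,
# using exported web-gateway/proxy logs. The log format (user<TAB>domain per
# line) and the domain watchlist are assumptions for illustration; a real
# deployment would use your gateway's export format and your policy list.
from collections import Counter

AI_DOMAINS = {"chat.openai.com", "api.openai.com", "bard.google.com", "claude.ai"}  # illustrative


def summarize_ai_usage(log_path: str) -> Counter:
    """Count requests to watchlisted AI domains per user."""
    usage = Counter()
    with open(log_path, encoding="utf-8") as fh:
        for line in fh:
            parts = line.strip().split("\t")
            if len(parts) < 2:
                continue
            user, domain = parts[0], parts[1].lower()
            if domain in AI_DOMAINS:
                usage[user] += 1
    return usage


if __name__ == "__main__":
    for user, count in summarize_ai_usage("gateway.log").most_common():
        print(f"{user}: {count} requests to AI services")
```

In practice, the same watchlist can drive both gateway blocking policies and the thresholds an analytics team uses to flag unusual AI activity.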

In uniting the triad of People, Processes, and Technology in line with the guidance of the EO, businesses can establish a robust foundation for the ethical, responsible, and secure deployment of AI. This approach not only aligns with current best practices, but also prepares organizations to swiftly adapt to potential future regulations.