December 20, 2023, by Chris Strand

Breaking Down the White House’s Executive Order on AI

On October 30th of this year, President Biden signed an Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The order outlines a comprehensive approach to ensuring that AI is developed and used responsibly and ethically. It is a significant step forward, in line with efforts by other nations to ensure that AI is used for the benefit of society and not to its detriment. The White House EO on AI is heavily positioned around three important concepts: data protection, privacy, and disclosure.

These three concepts are critical priorities given AI’s role in modeling and accelerating business processes and systems that handle critical user data across the supply chain. Some aspects of the EO will draw particular interest because of their impact on regulatory considerations. Businesses must clearly understand their responsibilities and maintain a compliant environment without disrupting operations.

Let’s take a closer look at the EO and the specific requirements it establishes.

Data Testing Disclosure Requirement: AI developers and users must now disclose how they test their AI models for data protection, safety, and trustworthiness, along with the results of that testing. Like other federal frameworks and rules that emerged in the past year, such as the NIST CSF 2.0 framework and the new SEC cybersecurity disclosure rule for Form 8-K filings, this requirement puts an added burden on businesses: first, they must prove the effectiveness of their security controls, and second, they must demonstrate the posture of their systems through proactive collection and disclosure of evidence-based data. While a positive step, the requirement may also open the door to new types of fraud, as spoof attacks prey on data disclosure mandates.

AI Safety and Security Board: This newly created authority will oversee new standards and technology for security testing and measurement of AI systems, increasing the scrutiny companies face when using AI. As with several cybersecurity framework amendments prompted by threats to the national supply chain in recent years, companies will need more dynamic control over the measurement of gaps across their estate, staying ahead of security vulnerabilities with a remediation plan before they are audited. NIST will have a hand in designing these new testing frameworks. Companies can benefit from measuring their controls against the new NIST CSF 2.0 framework, which adds a new function (Govern) to help organizations provide proper cyclical disclosure of their security risk profile and posture.

Protect Americans from AI-enabled Fraud: This aspect could be challenging for AI users and developers, as it requires careful consideration to protect PII and consumer data. While other general data privacy mandates require disclosing personal or consumer data, many of those regulations have been abused and have become the target of fraudulent schemes. For example, the GDPR requires companies to disclose and hand over PII and private data in response to consumer rights requests; since its introduction, however, it has created new opportunities for cybercriminals to spoof data requests and fool businesses into unknowingly divulging private data. These privacy mandates, while well-intentioned, can give new life to many groups operating in the underground, such as access brokers and data-exposure culprits.

AI Tools to help find and fix vulnerabilities in critical software: This initiative is only as good as the teams implementing and training the policy under which AI will seek out system gaps and vulnerabilities. Like any other automated technology, its effectiveness will depend significantly on how well the AI system has been trained on the inspection policy, so the checks and balances of the tests and measurements run against these systems must be monitored closely and carefully. Many major data breaches of the past stemmed from gaps slipping into systems because a policy configuration was too weak, or its detection threshold too lax, to identify the vulnerabilities that caused an adverse security event. Many recent breaches caused by third-party vulnerabilities were simply a case of failed security policy rather than a lack of security controls at the parent enterprise, as the sketch below illustrates.
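To make that point concrete, here is a minimal, hypothetical sketch in Python (the CVE identifiers, scores, and threshold values are invented for illustration) of how an automated scanner’s output depends entirely on the severity threshold written into its inspection policy:

    # Hypothetical illustration: an automated scan is only as strict as
    # the severity threshold set in its inspection policy.
    from dataclasses import dataclass

    @dataclass
    class Finding:
        cve_id: str
        cvss_score: float  # 0.0 (informational) to 10.0 (critical)

    def flag_findings(findings, min_severity):
        # Report only the findings at or above the policy threshold.
        return [f.cve_id for f in findings if f.cvss_score >= min_severity]

    findings = [
        Finding("CVE-0000-0001", 9.8),  # critical
        Finding("CVE-0000-0002", 6.5),  # medium, but still exploitable
    ]

    # A policy tuned only for criticals silently drops the medium-severity
    # flaw, even though attackers routinely exploit exactly such gaps.
    print(flag_findings(findings, min_severity=9.0))  # ['CVE-0000-0001']
    print(flag_findings(findings, min_severity=4.0))  # catches both

The same scan data yields very different security postures depending on a single policy value, which is why the order’s emphasis on how AI testing policies are trained and tuned matters so much.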

Overall, for the security assessment and regulatory community, the EO on AI could be a positive initiative and contribute to a more robust cybersecurity environment. Efforts must be made to ensure that the policies and legislation that evolve from the order foster an environment that harnesses AI positively, while recognizing that the frameworks implemented will need safeguards against errors in automation and a lack of human intervention.

Learn more about the future of regulatory mandates and discover their impact on business leaders. Download our 2024 Predictions Report.
