At the core of any regulation is an AI-ready workforce. While organizations are pursuing reskilling aggressively, there are reports of employees feeling that using AI would make them look lazy or incompetent. And unlike with traditional software systems, a workforce ready to handle Responsible AI must attend to both upstream and downstream ethical and legal risks. It needs to understand licensing practices around open vs. closed AI models, how liability is distributed among value-chain participants, use-case-level compliance requirements, IP issues in training datasets, ownership of AI-generated outputs, synthetic data generation, caching of user inputs, regurgitation of training data caused by over-training, and more.
All these issues come on top of the concerns about safety, privacy, security, hallucinations, bias, and explainability that already dominate conversations around ethical and responsible AI. Unfortunately, legal opinions across countries are emerging neither at the same pace nor in the same direction. Thus, we are looking at a jigsaw puzzle of AI laws across major markets.
Major AI Legislation
In the U.S., for example, no AI law is expected at the federal level in the near future. But states have taken the lead and have passed more than two dozen AI laws to date. Four states (New Hampshire, Tennessee, Montana, and Oregon) have enacted AI legislation. Another 18 states (including California, New York, Texas, and Florida) have a mix of enacted and proposed legislation, and a further 17 states have proposed legislation. Only seven states have no proposed legislation at all.
Across the Atlantic, in sharp contrast, a single law presents its own complexity. The recently passed EU AI Act lays out very nuanced requirements for companies, and it carries real weight: penalties for non-compliance can reach up to 7% of global revenue. The Act requires companies to classify their AI use cases by risk, banning (a) prohibited use cases outright and imposing additional compliance obligations on (b) high-risk ones.
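To make the classification exercise concrete, here is a minimal sketch of how a compliance team might triage internal use cases against risk tiers loosely modeled on the Act. The tier names, the example use-case labels, and the mapping between them are illustrative assumptions; real triage rests on legal review of the Act's actual categories, not a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers loosely mirroring the EU AI Act's structure."""
    PROHIBITED = "prohibited"      # e.g., social scoring by public authorities
    HIGH_RISK = "high_risk"        # e.g., hiring, credit scoring, critical infrastructure
    LIMITED_RISK = "limited_risk"  # e.g., chatbots, which carry transparency duties
    MINIMAL_RISK = "minimal_risk"  # e.g., spam filters

# Hypothetical mapping from internal use-case labels to tiers; in practice
# this assignment comes from legal review.
USE_CASE_TIERS = {
    "social-scoring": RiskTier.PROHIBITED,
    "resume-screening": RiskTier.HIGH_RISK,
    "customer-chatbot": RiskTier.LIMITED_RISK,
    "spam-filter": RiskTier.MINIMAL_RISK,
}

def triage(use_case: str) -> str:
    # Default conservatively to HIGH_RISK for unclassified use cases.
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH_RISK)
    if tier is RiskTier.PROHIBITED:
        return f"{use_case}: do not deploy (prohibited practice)"
    if tier is RiskTier.HIGH_RISK:
        return f"{use_case}: conformity assessment, documentation, human oversight required"
    return f"{use_case}: lighter transparency obligations apply"

for uc in USE_CASE_TIERS:
    print(triage(uc))
```

Defaulting unknown use cases to the high-risk bucket reflects the conservative posture the Act's penalty structure encourages: misclassifying downward is the costly mistake.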
Misconceptions about web scraping are already causing legal friction among organizations. Organizations also need to sensitize their user bases about prompt etiquette, especially when the AI system is fine-tuned on user interactions that may change its behavioral patterns. For such shape-shifting AI systems, attributing legal liability to the AI system or to the user remains uncharted territory; one partial mitigation is sketched below.
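One practical step, shown here as a minimal sketch under assumed names, is to log every interaction alongside the exact model version that produced the response, so that behavior changes introduced by later fine-tuning rounds can be traced if liability questions arise. The `log_interaction` helper, its fields, and the version string format are all hypothetical.

```python
import json
import hashlib
from datetime import datetime, timezone

def log_interaction(log_path: str, user_id: str, prompt: str,
                    response: str, model_version: str) -> None:
    """Append an audit record tying a response to the model version
    that produced it. Hashing the prompt and response avoids storing
    raw user input, since caching inputs is itself a privacy risk."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        "model_version": model_version,  # e.g., base model + fine-tune checkpoint
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage: each fine-tuning round bumps model_version, so an
# auditor can later ask which checkpoint was live when an output occurred.
log_interaction("audit.jsonl", "user-42", "Summarize this contract...",
                "Here is a summary...", "base-v3+ft-2024-06-01")
```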
Enterprises can also face challenges in AI development from sector-specific regulations and lawsuits. The General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) are two good examples of data laws that are crucial when training AI models.
Within the EU and California, AI providers can collect personal data only with explicit, informed consent that covers how the collected data will be used. And personal privacy is only the tip of the regulatory iceberg: AI compliance must also address export controls, record-keeping, and local consumer protection regulations to ensure ethical and legal AI development.
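As one illustration of what an explicit-consent requirement can look like in practice, here is a minimal sketch of a consent gate a training pipeline might check before ingesting user data. The `ConsentRecord` structure and the purpose strings are assumptions made for this sketch, not a mechanism prescribed by GDPR or CCPA.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """Hypothetical per-user consent record: the purposes this user
    explicitly agreed to, e.g., 'model_training' or 'analytics'."""
    user_id: str
    granted_purposes: set[str] = field(default_factory=set)

def may_use_for_training(consent: ConsentRecord) -> bool:
    # GDPR/CCPA-style gate: data flows into training only if the user
    # gave explicit consent for that specific purpose.
    return "model_training" in consent.granted_purposes

records = [
    ConsentRecord("user-1", {"model_training", "analytics"}),
    ConsentRecord("user-2", {"analytics"}),  # consented to analytics only
]

training_eligible = [r.user_id for r in records if may_use_for_training(r)]
print(training_eligible)  # ['user-1']; user-2's data is excluded
```

The design point is that consent is purpose-specific: agreeing to analytics does not imply agreeing to have one's data used for model training, so the pipeline must check the exact purpose before ingestion.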