Dominique Shelton Leipzig is a Privacy & Cybersecurity Partner at Mayer Brown and leads its Global Data Innovation Practice.
The power of AI to transform our lives for the better is promising. From healthcare to education, from energy to finance and entertainment, AI is increasing productivity, improving accuracy and allowing customization of services. Consultants project that AI will add $7 trillion to the global economy over the next ten years. If AI were a country, that output would make it the world’s third-largest economy, behind only the U.S. and China.
With the rapid and widespread adoption of AI across industries, the need for safety governance to protect consumers is clear. Yet only 28% of recently surveyed executives believed their companies were ready for AI regulation. The hard work, however, has already been done for them. In my research, I’ve found that draft legislation codifying “trustworthy” AI, now pending in 37 countries across six continents and informed by data, answers the question of how to make AI safe and effective. Companies should consider building AI safety governance around this draft legislation now: standards are coming, and waiting until the laws are passed will be too late.
The Blueprint For Trustworthy AI
CEOs, legislators and community groups have raised concerns that without AI governance for sensitive use cases, we risk hardcoding harms (e.g., bias, privacy violations and misinformation) into our global society for decades to come. To avoid this outcome, one major tech CEO called for the industry to be “proactive” rather than “reactive,” a posture companies can readily adopt by following existing AI frameworks.
Shaped by insights from computer science experts, the draft legislation calls for the tech companies that create AI systems, and the business customers that license them, to risk-rank AI applications into categories analogous to a traffic light at an intersection: prohibited “red-light” uses to be avoided entirely, such as continuously monitoring people in public spaces; “green-light” low-risk uses, like conversing with an AI-powered chatbot on a retailer’s website; and, lastly, “yellow-light” high-risk uses, which are the focus of most AI governance.
Draft legislation permits high-risk AI use, but, just as when crossing an intersection on a yellow light, regulators are calling for businesses and individuals to proceed with caution. High-risk AI includes use cases that could severely harm the emotional or physical well-being of individuals: AI used for health, employment or personal finances; surveillance at work or school; sensitive data (e.g., race, ethnicity, religion, political beliefs, sexual orientation, trade union membership); children; criminal justice; democracy (e.g., the right to vote); and critical infrastructure (e.g., energy grids, hospitals, the food supply).
Given the importance of these high-risk areas, governments want companies to ensure that the AI is trained on accurate data and that technical documentation records prior testing and mitigation, so that if issues arise, they can be readily diagnosed and fixed. Finally, if the AI cannot be fixed to avoid harm to high-risk groups like children, the draft legislation calls for companies to have a “failsafe,” a way to stop or kill that particular AI use case.
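To make the framework concrete, here is a minimal, hypothetical Python sketch of how a compliance team might encode the traffic-light tiers and the fail-safe requirement as a deployment gate. The category names, tier mapping and function names are illustrative assumptions of mine, not terms drawn from any statute or draft bill.

```python
# Hypothetical sketch of the traffic-light risk tiers described in the
# draft legislation. All category names and thresholds are illustrative
# assumptions, not statutory text.
from enum import Enum


class RiskTier(Enum):
    RED = "prohibited"    # e.g., continuous monitoring of public spaces
    YELLOW = "high_risk"  # e.g., health, employment, children, elections
    GREEN = "low_risk"    # e.g., a retail chatbot conversation


# Illustrative mapping of use-case categories to tiers, loosely following
# the examples discussed above.
PROHIBITED_CATEGORIES = {"continuous_public_surveillance"}
HIGH_RISK_CATEGORIES = {
    "health", "employment", "personal_finance", "workplace_surveillance",
    "sensitive_data", "children", "criminal_justice", "democracy",
    "critical_infrastructure",
}


def classify_use_case(category: str) -> RiskTier:
    """Assign a traffic-light tier to a proposed AI use case."""
    if category in PROHIBITED_CATEGORIES:
        return RiskTier.RED
    if category in HIGH_RISK_CATEGORIES:
        return RiskTier.YELLOW
    return RiskTier.GREEN


def may_deploy(category: str, mitigations_documented: bool,
               kill_switch_works: bool) -> bool:
    """Gate deployment: red is blocked outright; yellow must show
    documented testing/mitigation and a working fail-safe."""
    tier = classify_use_case(category)
    if tier is RiskTier.RED:
        return False  # Prohibited use case: never deploy.
    if tier is RiskTier.YELLOW:
        # "Proceed with caution": require documentation and a fail-safe.
        return mitigations_documented and kill_switch_works
    return True  # Green light: low risk, deploy with standard review.


if __name__ == "__main__":
    print(may_deploy("children", True, True))                    # True
    print(may_deploy("children", True, False))                   # False
    print(may_deploy("continuous_public_surveillance", True, True))  # False
```

In practice, the tier mapping would come from legal review of each jurisdiction’s final text; the point of the sketch is only that red-light uses are blocked outright, while yellow-light uses must show documented testing and a working kill switch before deployment.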
President Biden’s Executive Order on AI follows the principles discussed above and will likely impact vendors, government benefits recipients and federal government contractors. The message is clear: trustworthy AI is top of mind for the federal government, and leaders should take the necessary measures.
Why You Should Consider Adhering To Frameworks When It Comes To AI
Traditionally, companies do not follow draft legislation, often out of fear that the regulation might change before it is final. Key lessons from the past, however, show that at moments of critical importance, successful companies have adopted legislative trends before final laws went into effect. One CEO, for example, likened proactive AI safety to seat belts. Automakers that included seat belts in their car designs prior to 1968, when federal law mandated them, were likely able to make their safety features a product differentiator, help save millions of lives and become market leaders by being trusted. To the same effect, trusted companies are 400% more successful than their competitors.
The company with the highest market cap in the world ($3 trillion) embraced privacy-protective trends early, before final legislation passed. Other companies lost over $1.4 trillion in market cap by waiting until after the laws were adopted to implement privacy protections. Where draft legislation is consistent around the world, it reflects a growing consensus that will not reverse.
As was true for privacy and seat belts, draft AI legislation, grounded in data, provides sound recommendations for the safe use of AI and aligns business interests with societal well-being. Adhering to these frameworks can maximize AI’s benefits while minimizing harm to people.
The views expressed are Dominique Shelton Leipzig’s own and do not constitute legal advice, nor do they constitute an attorney-client relationship. They also do not represent the views of her employer, clients or any other company.