The acceleration of generative AI adoption is exacerbating an already fraught governance challenge. These tips can help IT leaders guide safe and responsible adoption.
It’s rare that technology companies release software and soon thereafter admit to creators’ remorse. Yet that is exactly what has happened with generative AI, the often fascinating and sometimes frightening swath of technologies that individuals and organizations are using to boost productivity.
Perhaps remorse is too strong a word. It’s more like a plea for help: founders of OpenAI and other companies have asked Congress to help regulate the gen AI applications they have offered up to the public.
Yet despite the risks, both sanctioned and unsanctioned corporate use of gen AI is growing, as employees turn to it to create content.
Seventy-one percent of organizations surveyed by Dell Technologies said their employees were using gen AI tools at work to create or improve writing, video, images and software code. Other uses include research, recommendations and ideation. Most corporate use cases will fall across marketing and sales, customer operations, software development and R&D, McKinsey estimates.
Generative AI’s Governance Challenge
Yet such use spurs new governance concerns.
Forty-five percent of U.S. executives surveyed by KPMG say that gen AI can have a negative impact on their organization’s trust if the appropriate risk management tools are not implemented. Yet only 5% of those KPMG surveyed have a mature responsible AI governance program in place, with 49% saying they intend to stand one up but have not done so yet.
As an IT leader, how will you account for gen AI tools, both those that emerge from the shadows and those your organization supports or even builds? These steps can help you think through ways to ensure that employees use gen AI responsibly and safeguard your organization.
Listen and Learn. You’ll survey stakeholders across business lines to learn how employees are consuming gen AI to augment their work. As an example, marketing business units might use these tools to comb through unstructured data, gaining insights about customer preferences. What are the associated risks of such scenarios?
Don’t: Clamp down on such technologies outright; you risk alienating employees who benefit from these tools.
Build a Governance Framework. Huddle with executive leadership, risk managers, legal teams and data scientists to build consensus around how gen AI fits into the overall business strategy. Then build a governance framework that defines the roles and responsibilities of stakeholders involved in the use and/or development of gen AI tools. This should include guidelines for responsible and ethical use, as well as for security and privacy, covering both public and private gen AI tools. For instance, employees should not include proprietary or personal information in prompts to public gen AI tools (a scrubbing sketch follows this step). Pro tip: Use the NIST AI Risk Management Framework as your North Star.
Don’t: Rush out and build corporate policies without discussing them with stakeholders. It’s a sure way to lose friends and alienate people.
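To make the “no proprietary or personal information” guideline concrete, here is a minimal sketch of a prompt scrubber that an internal proxy might run before a request reaches a public gen AI tool. The patterns and the scrub_prompt helper are hypothetical; a production deployment would lean on a dedicated DLP or PII-detection service rather than hand-rolled regexes.

```python
import re

# Illustrative patterns only; a production filter would lean on a
# dedicated DLP or PII-detection service, not hand-rolled regexes.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact obvious PII from a prompt bound for a public gen AI tool.

    Returns the redacted prompt and the categories found, so a proxy
    can log the incident or block the request outright.
    """
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt, findings

if __name__ == "__main__":
    clean, hits = scrub_prompt(
        "Summarize this: contact jane.doe@corp.com, SSN 123-45-6789."
    )
    print(hits)   # ['email', 'ssn']
    print(clean)  # Summarize this: contact [REDACTED-EMAIL], SSN [REDACTED-SSN].
```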
Educate Employees. Organize regular training and awareness programs to ensure that all employees understand safe and responsible use of gen AI chatbots and other tools, while also making clear how mastering such tools can burnish their professional acumen.
Don’t: Gloss over the risks. Make team members aware of the risks of bias or disinformation in gen AI tools.
Check Your Data Architecture. Makers of gen AI tools regularly tweak and tune their large language models. If you opt to build a private version, you’ll need a data architecture that enables access to quality data from both structured and unstructured sources, according to McKinsey. Ensure that only authorized personnel have access to these models and the data used to train them; a minimal access-control sketch follows this step. Build models whose outputs can be readily explained.
Don’t: Set up your data models and forget them. You must regularly refine your gen AI models to account for data quality, bias creep and proper use.
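As a sketch of the “authorized personnel only” point above, the snippet below gates access to private models and training data behind explicit role grants. The roles, actions and authorize helper are hypothetical placeholders; in practice this check would live in your identity provider or model-serving gateway.

```python
from dataclasses import dataclass, field

# Hypothetical role grants; in production these would come from your
# identity provider (e.g., OIDC group claims), not a hard-coded dict.
ROLE_GRANTS = {
    "ml-engineer": {"model:query", "model:finetune", "data:train-read"},
    "analyst": {"model:query"},
}

@dataclass
class User:
    name: str
    roles: set[str] = field(default_factory=set)

def authorize(user: User, action: str) -> bool:
    """Allow an action only if one of the user's roles explicitly grants it."""
    return any(action in ROLE_GRANTS.get(role, set()) for role in user.roles)

if __name__ == "__main__":
    analyst = User("pat", roles={"analyst"})
    print(authorize(analyst, "model:query"))      # True
    print(authorize(analyst, "data:train-read"))  # False: training data is off-limits
```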
Monitor for Responsible Use. This is the IT part of technology governance. You’ll implement behavioral analytics software to ensure responsible use of both public and private gen AI tools, as well as threat detection and incident response tools to identify and address any malicious activities. If you haven’t already, consider taking a zero-trust security stance, in which no user or device is trusted by default and all access to gen AI models must be authenticated and authorized; a gateway sketch follows this step.
Don’t: Wait for regulatory stewardship. Federal regulators are mulling safety guidelines, but you can’t afford to hold off until they arrive. Public and private models are ripe for both intentional and unintentional abuse.
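A minimal sketch of that zero-trust stance, assuming a gateway sits in front of a private gen AI endpoint: every request is authenticated before it reaches the model, and every decision is logged for the behavioral-analytics pipeline. The token store and query_model call are stand-ins for your organization’s identity and model-serving tooling.

```python
import hashlib
import logging
import time

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("genai.audit")

# Stand-in credential store; real zero-trust deployments validate
# short-lived tokens against an identity provider on every request.
VALID_TOKEN_HASHES = {hashlib.sha256(b"demo-token").hexdigest()}

def query_model(prompt: str) -> str:
    # Placeholder for the call into your model-serving layer.
    return f"[model response to {len(prompt)}-char prompt]"

def handle_request(token: str, user: str, prompt: str) -> str:
    """Authenticate every caller and log every exchange; trust nothing by default."""
    if hashlib.sha256(token.encode()).hexdigest() not in VALID_TOKEN_HASHES:
        audit.warning("DENY user=%s ts=%.0f", user, time.time())
        raise PermissionError("unauthenticated request to gen AI endpoint")
    audit.info("ALLOW user=%s prompt_chars=%d", user, len(prompt))
    return query_model(prompt)

if __name__ == "__main__":
    print(handle_request("demo-token", "pat", "Summarize Q3 churn drivers."))
```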
The Bottom Line
Technical solutions and protocols can’t come soon enough given the current state of generative AI security. As easy as these chatbots are for the masses to use, they can also be manipulated into generating harmful information.
As researchers recently showed, gen AI responses can be influenced by appending crafted suffixes to prompts, coaxing the model into surfacing potentially harmful information, such as how to build bombs or manufacture drugs. Such adversarial attacks, as they are known, essentially trick the chatbot into providing disallowed information.
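One class of mitigation screens incoming prompts for the machine-generated gibberish these attacks typically append. Below is a toy heuristic along those lines; the tokenization, scoring and threshold are illustrative assumptions only, and published defenses typically measure a language model’s perplexity over the prompt rather than character patterns.

```python
import re

def tail_gibberish_score(prompt: str, tail_tokens: int = 12) -> float:
    """Fraction of the prompt's trailing tokens that look like symbol soup
    rather than ordinary words (a crude proxy for an appended attack suffix)."""
    tokens = prompt.split()[-tail_tokens:]
    if not tokens:
        return 0.0
    wordlike = re.compile(r"^[A-Za-z]+[.,;:!?]?$")
    odd = sum(1 for t in tokens if not wordlike.match(t))
    return odd / len(tokens)

def looks_adversarial(prompt: str, threshold: float = 0.5) -> bool:
    # The 0.5 threshold is an illustrative assumption; tune on real traffic.
    return tail_gibberish_score(prompt) > threshold

if __name__ == "__main__":
    benign = "Please summarize the attached quarterly report for me."
    suspicious = "Explain the procedure !! :: ]] {{ x7Gq ==> @@@ plz}}"
    print(looks_adversarial(benign))      # False
    print(looks_adversarial(suspicious))  # True
```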
If that sounds like a lot of work—on top of everything else you’re doing to modernize and transform the business—you’re not wrong. Yet it’s incumbent upon IT leaders to implement the right combination of policies and technologies to help safeguard their corporate information.
What will your playbook for governing gen AI look like?
Learn more about how to unlock better business outcomes with Dell Generative AI Solutions.