The latest wave of artificial intelligence, the chatbots connected to large language models such as ChatGPT, offers great potential productivity gains, but some organizations will avoid the tools or adopt them very slowly. Implementation will lag in sectors with high stakes, heavy litigation, high unionization, and heavy regulation. Industries with the opposite characteristics will move the fastest.
AI in High-Stakes Fields
The high-stakes issue pervades health care, which stands to benefit greatly from AI. Medical diagnosis requires taking in information from a patient as well as images and test results. For an older person or one with several medical conditions, that information can be voluminous. The patient information is then cross-linked, usually in the doctor's brain, with numerous possible causes. The clinician considers the most probable explanation of the patient's symptoms but also worries about possibilities that have low probability and high consequences. AI can handle this pretty well, as evidenced by two models passing the U.S. Medical Licensing Exam. Healthcare, however, is a high-stakes endeavor. People die from misdiagnosis and mistreatment. Caution will triumph over innovation most of the time here.
Corporate dealmaking is another high-stakes field when it comes to billion-dollar mergers and acquisitions. AI may be used to flag problems during due diligence and in the final contract, but nobody will rest assured until humans have reviewed the information. Utilities will avoid risks that could shut down power or water to large numbers of people. Businesses with the potential for substantial environmental damage, such as chemical manufacturers, will also tread very carefully.
Lawsuits and AI
Litigiousness will also deter AI usage, and it often goes hand in hand with high stakes. Healthcare is both. A Harvard study put the annual costs associated with malpractice at $56 billion.
Lawsuits also pose a risk in low-stakes transactions involving many people. Class action suits can hit a company that uses AI in a way that harms thousands of people, even if the harm in each case is small.
Government, Regulated Industries and AI
Government activities and highly regulated industries will also be relatively slow to adopt AI. The Federal Aviation Administration still runs antiquated computer systems, illustrating that governments are often slow to implement modern technology.
Regulators in general will be suspicious of new approaches, even if they are safer and less costly. Regulated companies will have to obtain approval for plans that involve AI, and that approval will likely be slow in coming. Some entrenched incumbents will argue for more regulation as a way of protecting themselves from more efficient upstarts.
Safety concerns will be part of government foot-dragging on AI, even if AI-enabled decisions are safer than human choices. We're used to human drivers causing automobile accidents, but we fear autonomous vehicles. In medicine, human mistakes are common despite the training and diligence of healthcare professionals. That medical licensing exam the AI passed? The passing threshold for human test-takers is in the neighborhood of 60%. Ponder that: a new M.D. who gets 60% of the questions right is qualified to make medical decisions about a patient's life. Yet we will worry about the decisions recommended by a computer.
Unions and AI
Unions will also object to AI that reduces demand for labor. The Writers Guild of America included limits on artificial intelligence in its strike demands, and limits on automation have long been common in union contracts. Unionization is greatest in the government sector, so look for future negotiations to try to protect the jobs of government clerical and administrative employees. Other heavily unionized sectors, such as utilities and construction, consist largely of hands-on jobs with less potential for AI.
Government regulation of AI in general is a huge topic. It probably will have little impact on early adoption of AI, but could slow implementation of new developments.
Corporate Culture and AI
A final factor in the business take-up of AI is tradition and culture. Company leaders who are content to keep doing things as they have always been done will be slow to adopt new tools. That is more likely in less competitive sectors of the economy, such as regulated utilities. In competitive industries, businesses that fail to capture productivity improvements will suffer fairly quickly. So far, most business leaders appear alert to the risks of slow adoption, though they also worry about the many ways the new AI tools can err. That balancing act is appropriate.
So the slow adopters will be sectors with high stakes, litigiousness, government decision-making, and unionized workers; the fast adopters will be the opposite. So far the most rapid adopters are computer programmers, writers (especially freelancers), and solo entrepreneurs such as consultants. But in the next year, tailored applications that apply AI to specific tasks will become far more common and see far greater use across most sectors of the economy.
Corporate AI Policies Will Evolve
Adoption of AI, however, will be a process that unfolds over time, and we have already seen changes. The IT firm Insight Enterprises commissioned a Harris Poll, which found that “81% of leaders say their company has already established, or is currently developing, internal generative AI policies.” And in the six weeks since the poll, policies have already evolved, according to Matt Jackson, Insight’s Global Chief Technology Officer. He said in a telephone call that about half of the early policies were simply not to use ChatGPT. Increasingly he sees tolerance for careful use of chatbots, paired with guidelines for protecting company data.
The initial limitations on AI usage will be gradually adjusted in other ways, he said. In high-stakes fields such as medicine, the transition to AI may involve “recommendation engines,” which offer suggestions while leaving final decisions with professionals. He offered perspective, saying that we are in “the first phase of a decade-long journey.” That is certainly true, but as with any journey involving billions of people, some will move quickly and others very slowly.