Generative AI Obstacles Slow Down Tax, Accounting Transformation

Tax and accounting firms are increasingly embracing generative artificial intelligence for some of their operations, but regulatory and ethical hurdles are slowing adoption.

The list of obstacles is long—data privacy concerns, an uncertain regulatory climate, a need for skills training and better internal systems, plus the sometimes questionable information generated by AI itself. Given those challenges, it may be quite some time before the technology comes anywhere near the $2.6 trillion annual impact on the US economy predicted in a McKinsey & Co. report.

“It’s not quite as everywhere as the hype makes it seem like it is,” said Gia Chevis, the director of Innovation in Accounting Data and Analytics at Baylor University in Waco, Texas. “But I don’t think the hype is particularly overblown for the long term.”

Big Four Pioneers

Generative AI erupted into mainstream discourse in late November 2022 with the launch of ChatGPT, the OpenAI chatbot that produces content in response to conversational prompts. ChatGPT’s efficient natural language processing abilities, coupled with its research and analytical power, set off a generative AI domino effect within the tax and accounting industries.

The Big Four are leading the way. Over the past few months, both Ernst & Young and KPMG have launched comprehensive training programs, Deloitte announced a generative AI practice, and PwC said it was committing $1 billion to “expand and scale” its AI resources.

From identifying variations in statistics to classifying intricate information, firms are using the technology to tackle the complex data sets that are ever-present in fields like tax, audit, advisory, and compliance. In some cases, it can achieve a more thorough comprehension of documents and data than humans can, something that firms say will significantly enhance their services.

“The common underpinning is that it helps us do our job better by extending our reach into vast amounts of data to bring it to the surface with meaning,” said Wes Bricker, co-leader of PwC’s trust solutions.

Illustration: Jonathan Hurtarte/Bloomberg Tax

Firms like Crowe LLP are using generative AI to more efficiently onboard and train newer employees, while Intuit—which owns TurboTax—has already announced customized large language models specialized in financial topics. Meanwhile, CPA.com is working to raise awareness of the technology’s impact on the accounting profession, offering training and leadership activities.

Firms are collaborating with major technology players like Google Cloud, Microsoft Azure OpenAI Service, Amazon Bedrock, and OpenAI to harness their generative AI capabilities. They continue to work with internal and external teams to build carefully controlled systems that protect their proprietary data and, critically, customer information.

“I fully expect to see individual organizations creating their own flavor of the tools, and then certainly, as it gets easier, the creation of niche models around very specific problems,” said Jeff Schmidt, Crowe’s chief technology officer. “But all of that would be behind the firewall, if you will, to make sure that you’re adequately protecting your information.”

There is much further to go. But it isn’t as simple as employees logging onto ChatGPT, as Samsung Electronics made clear when it banned employees from using the platform.

Numerous Limitations

The limitations of generative AI go far beyond “hallucinations”—when a model presents made-up information as fact—and include major data privacy concerns. ChatGPT, for example, offers users some tools to manage how data is used, but it is still a public model that learns from the prompts it receives, meaning sensitive data could be at risk. Models also have limited knowledge bases and require extensive training on real data to be of value for tax and accounting purposes.

Regulators are far behind the technology. The European Union’s proposed AI Act would be the world’s first law addressing the problems of AI; in the most optimistic scenario, it could be passed by the end of the year and take full effect within a few years. In the US, there is no concrete plan for an overarching code of conduct, and congressional dysfunction makes any such framework unlikely for the foreseeable future. That means individual companies must navigate a fast-changing technological landscape in which some of the risks are currently unknowable.

“It may take a long time for a proper set of rules and laws to come in place, because this is so complex,” said Brian Sathianathan, co-founder of iterate.ai, a low-code AI software company. “The cat is out of the bag. Every day, there are at least five or six foundation models released in open source. It’s very hard, in an environment that is evolving so fast, to create regulation.”

It is hard enough to regulate when generative AI is only carrying out mundane tasks, but experts are concerned about what will happen as the technology develops.

“Understanding that we are letting this computer program do our inventory counts based on pictures, that’s easy to understand,” said Chevis, speaking to the perspectives of regulators. “Exercising judgment about risk profiles and how we should actually do our actions as an audit, that I think they’ll have a harder time gaining comfort around.”

As with many emerging technologies, experts also warn of an “AI divide”: the technology will demand new skills, inevitably eliminating some jobs while creating others.

Generative AI will be able to root out more instances of corporate fraud, proponents say, with the technology quickly outpacing human auditors in spotting anomalies and red flags.

“If ultimately those with the best AI have the ability to identify frauds at a higher rate than the auditor and the profession in general, that also is a risk to the profession,” said Dane Mott, accounting analyst with the Capital Group, during a meeting of the US audit regulator’s main advisory group last week.

But firms are also pouring money into training programs because, they say, the technology is meant to enhance, not replace, employees.

“That’s what we need to do to grow, retain, recruit people,” said Martin Fiore, Ernst & Young’s Americas Deputy Vice Chair – Tax. “I really believe that AI is going to empower them to be basically smarter, more efficient, more productive.”

Looking Forward

Either way, the technology must advance and regulatory guidelines must be put in place before it can fulfill its promise.

“It augments the preparation process, but doesn’t replace the need for an expert professional to reconsider and understand the analysis compiled, to evaluate the quality and thus the portability of the analysis presented,” said Bricker.

Time is the underlying factor, according to the experts. Firms need time to experiment, fail, and learn from what went wrong to ensure it does not happen again. Only then can they understand what standards and regulations are necessary, a process that may take years to unfold.

“We have a very long, established historical pattern of thinking that the seismic shifts are overhyped and getting disillusioned with them,” said Chevis, “but then they wind up having an even bigger impact than we anticipated in the first place.”

—with assistance from Amanda Iacone