What PwC’s experts think about ChatGPT and AI
As part of our recent article looking at ChatGPT and AI's effect on the future of accounting, we reached out to experts at PwC South Africa. This is the unabridged version of their email response, which we're publishing in the hope that it provides you with further insight.
PwC’s Responding Team
There is a prevailing opinion among many professionals that certain higher-level tasks cannot easily be replaced by AI: for example, FP&A work among accountants, or offering tax opinions and tax structures.
Given what we've seen of GPT-4's capabilities, what do you make of AI's applications in what we previously thought of as higher-level or specialised fields? Is it possible that professionals are being myopic about its implications?
Mark Allderman’s response:
We've been closely monitoring the evolution of GPT models, and we feel it is imperative for professional services firms to consider the potential impact that current and future AI models may have on their business and on the industries that they serve and that depend on them. Our curiosity was heightened by the release of GPT-4 from OpenAI, as it presents a major leap from GPT-3, and in such a short time after the launch of ChatGPT itself.
We feel that the fundamental nature of the work we do will undergo changes. Though not immediate (as the history of disruptive technology adoption shows), these changes may be irreversible and long-term. At the moment we're “testing” the capabilities of models like GPT-4, and in many cases they bring forth the “wow” factor. The next phase of adoption involves deeper training, with people using GPT models to do things that existing methods take longer to do or do not do as completely, bringing major time efficiencies. We believe it is at this point that GPT models will be massively adopted in the tasks we perform regularly.
Notwithstanding the incredible power of GPT-4, and of the potentially more capable GPT models to come in the near future, we feel there will remain a gap between what GPT models can do and what human specialists can do, drawing on their many years of experience, interactions with people and real lessons from real failures that lead to sustainable success.
In essence, we believe that human-guided intuition may not be replaceable by GPT models, at least in the short to medium term. We can't exactly comment on whether the professional community is being myopic, but we can offer the opinion that this technology needs to be fully understood and adopted in a governed, paced manner, whilst fully considering along the way its ethical impact and the disruption it may cause to a workforce. It’s up to each professional and/or professional services firm to invest in unpacking, understanding and taking a deliberate position when it comes to GPT models and the impact they may have on the work they produce and consume.
Is there something of an arms race among large audit and advisory firms? Has the number of people looking at AI as part of their KPIs increased in the last five years?
Riaan Singh’s response:
We won't be able to respond to the first part of this question as-is, but we can respond if it is reframed as: "Do you believe that large audit and advisory firms are racing to stay competitive?"
Our initial findings on the improving capabilities of GPT models show that they can provide unprecedented support for research and deep dives into topics (which still need to be fact-checked). A natural short-term effect of this is what we call hyper-productivity: users save what appears to be a significant amount of time, which can then be re-allocated to more strategic and, as previously mentioned, experience-based, intuitive tasks.
We have embraced digital disruption, maintain a culture of digital upskilling across the firm and have integrated progress thereof into KPIs; we often mandate particular technologies for all to learn, test and apply in their daily work. We envisage including GPT models as part of our territory-wide learning and development goals for every employee once we iron out the full scope of benefits, limitations and associated risks.
We embrace our ADAPT framework to ensure that we address and digest any new, relevant, sustainable technology disruption in the market. The framework ensures that we keep ourselves up to date, accountable and a valuable contributor to the societies that sustain us.
According to SARS, there are around 25 000 tax practitioners in South Africa. If the tax base stays constant, will there be a need for this many practitioners in five to ten years' time if LLMs or other AI ramp up the efficiency of doing tax returns and compliance?
Mohil Subban’s response:
Disruption to any industry from emerging technologies is difficult to predict, as it depends on many factors. We can advise that any sanctioned use of an LLM/GPT model for legal and regulatory use cases needs carefully designed, constructed, secure and thoroughly tested platforms.
The rate at which these platforms gain maturity may directly affect the efficiencies gained when using them. Again, we need to stress that these platforms must be approved for use by the appropriate regulatory bodies, with relevant guardrails in place to prevent error, bias and misuse. It must also be reiterated that LLM/GPT models may not be able to execute specialised tax review or advisory functions; these will most likely be left to specialists to handle. AI models may be able to help accelerate the first draft, but we feel that specialist review may always need to be performed, at least whilst this technology is maturing.
Automated tax tools like TaxTim have been present in SA for many years, but these have been aimed at individual taxpayers, not corporate legal entities, to expedite data capture and the direct filing of tax returns into the SARS eFiling platform. The tools to date have not attempted to make decisions or classifications of data based on a set of existing rules (e.g. the Income Tax Act or VAT Act together with precedent).
We expect that LLMs will be enhanced to include up-to-date tax precedent and legislation, and will likely be used for accelerated first-draft opinions, but in our view complex tax affairs will still require a tax professional’s review.
Are you aware of staff using ChatGPT as an equivalent to an API integration as part of their day-to-day tasks?
Mark Allderman’s response:
Globally, we have cautioned our workforce about using ChatGPT or GPT models via APIs, and we have taken a risk-sensitive approach to learning and understanding the benefits and limitations of this technology. At this stage, the use of ChatGPT and similar tools is not permitted in our delivery of audit engagements or other assurance services. For any other use by PwC employees, it is important that no proprietary or client-confidential information is input into ChatGPT, and that ChatGPT responses are neither relied on nor directly incorporated into PwC or client deliverables. We have clearly instructed our workforce not to share any confidential, operational or client-sensitive information with the models at this stage.
We’re now in the purposeful phase of identifying use cases to pursue. At PwC we incentivise innovation and embrace our ADAPT framework and The New Equation; these committed initiatives speak to our DNA of growing by embracing positive change and staying agile. Our staff are incentivised to re-imagine the possible as we try our best to solve important problems and build trust in society.
Is PwC building LLM tools into their audit risk identification methods?
Mohil Subban’s response:
Currently, ChatGPT and similar tools are not permitted to be used in any form on audit engagements or other assurance services. We are still working to understand the potential benefits as well as the inherent limitations and risks associated with these progressive GPT models. As we are committed to absolute quality and trust in our assurance services, we will only provision tools for operational use once they satisfy our audit standards and operational best practices.
What advice, if any, do you have for tax and finance professionals who want to use AI chatbots in their work? What sort of prompts or prompting strategy works best? What are some of the pitfalls and issues to be aware of in terms of bias, mistakes and copyright?
Riaan Singh’s response:
GPT models may have inherent issues for some time, including the potential to produce non-factual information, experience "hallucinations" and exhibit bias. Our advice would be to treat GPT model interactions in a similar way to the results of an internet search - you should verify the output and not fully rely on what has been generated. It may be an advanced starting point, but it is by no means a shortcut to an end point. We also advise against sharing any sensitive or confidential information with publicly available GPT models or the APIs that connect to them.
Prompt engineering may be an emerging technical field in the area of GPT models; we've seen that changing the way you prompt a model will affect its response, and time needs to be spent to understand which kinds of prompts work best for a given set of scenarios. Depending on the regulations affecting your field, be aware of copyright and compliance laws when publishing or using GPT-generated information.
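To make the point about prompt phrasing concrete, here is a minimal sketch of sending the same question to a model under two different prompts via OpenAI's Python client. The model name, prompt wording and workflow are our own illustrative assumptions, not PwC's, and any output would still need the verification and fact-checking described above.

```python
# Minimal sketch: the same tax question asked with two different prompt styles.
# Assumes the openai Python package is installed and OPENAI_API_KEY is set;
# the model name and prompts below are illustrative placeholders only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = {
    "bare": "Is home office equipment tax deductible in South Africa?",
    "structured": (
        "You are assisting a tax practitioner. In three bullet points, citing "
        "which provision of the Income Tax Act you are relying on, explain when "
        "home office expenses are deductible in South Africa. If you are unsure, "
        "say so explicitly."
    ),
}

for label, prompt in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce random variation so differences come from the prompt
    )
    print(f"--- {label} prompt ---\n{response.choices[0].message.content}\n")

# The structured prompt tends to produce a more specific, reviewable answer,
# but both outputs must still be verified against legislation and precedent.
```

Even a well-structured prompt only produces a starting point: as above, the output should be independently verified, and no client-confidential details should ever appear in the prompt itself.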
Do you have any insight into what LLM tools like ChatGPT will mean for the accounting and auditing world once they can be deployed on user-specific data, and once they're more easily able to handle image inputs?
Mohil Subban’s response:
The potential for disruption within the accounting and auditing fields will be the result of many moving parts; some of these parts involve technical engineering, and some involve alignment with laws, best practice and ethics. Setting up a secured version of a GPT model could yield many benefits; it would depend on how the system is architected and how easy it is to learn, use and adopt into business-as-usual tasks. As the image-input capability has only just been announced with GPT-4, it's unclear how it will benefit the field; that will depend on how it is designed into specific workflows that start with or contain images or paper-based documentation.
In summary, while we do believe there may be massive impacts felt by many different industries, we caution users to be wary of relying too heavily on this progressive technology. It still needs to mature and be industrialised, secured, tested and sanctioned by the right people so that it can do the right thing at the right time, in the right context and in the right way.