AI and Fake Audit Evidence - How to Mitigate Risks
The Independent Regulatory Board for Auditors (IRBA) has issued an Artificial Intelligence Audit Risks Mitigation Update for auditors. The main concern identified is AI's ability to generate realistic fake evidence, which could harm audit quality.
Risk Mitigation
To address these risks, IRBA recommends:
Having clear policies for AI use in audits.
Ensuring all AI-generated evidence is verified.
Maintaining data confidentiality to prevent unintended disclosure.
Using professional judgment when relying on AI-generated information.
Training teams and using appropriate detection tools to identify fabricated evidence.
The Impact on Accountants
When preparing financial statements, accountants may encounter the same AI-related risks, which can affect the accuracy and integrity of the financial records they work with. Furthermore, these risks apply to ALL financial statements compiled by accountants, not only those that are audited or reviewed. AI's ability to create convincing but false evidence makes it important for accountants to ensure that the data and documents used in financial reporting are authentic.
Actions to Consider
In line with the recommendations above, accountants should consider:
Developing their understanding of AI tools and their potential risks.
Reviewing AI-generated information carefully before using it for financial reporting or decision-making.
Ensuring that AI systems used in their firms comply with data confidentiality standards and that any sensitive information is protected.
Consult the CIBA ChatGPT guide under your member profile (Practice Tools/Guides) for more information on the challenges and measures you need to be aware of when using AI.
By following these guidelines, accountants can help maintain the reliability of financial statements and support the audit and review processes effectively.