By Jackson Johnson
June 30, 2025
This is an excerpt of the AI Accounting Playbook.

Building Trust in AI Accounting

As accounting firms adopt AI tools in audits, they face new questions about reliability, transparency, and compliance. Regulators like the PCAOB have made clear that if AI outputs can't be explained or reproduced, they could violate existing standards. Yet formal guidance on AI use in audits remains limited, leaving firms unsure about how to move forward. Some firms have responded by limiting AI to non-public clients, but this caution also presents a chance to lead. Firms that build strong AI governance practices now can stay ahead of future regulation and establish trust in their use of AI. This chapter covers key compliance barriers, governance best practices, and steps to create a trusted control environment.

Key Compliance Barriers

Accountants face several key compliance barriers when using AI, particularly as regulators such as the PCAOB, AICPA, and SEC increase their scrutiny.

Explainability

One major challenge is explainability. Many AI models, especially machine learning and generative AI, don't clearly show how they reach conclusions. This is a problem for auditors who need to support their findings. The lack of clarity makes it harder to meet audit evidence requirements, which must be sufficient, appropriate, and easy to understand, as outlined in PCAOB standard AS 1105.

Poor Documentation

Poor documentation is another major issue. This includes inadequate records of data inputs and outputs, training data, model logic, and controls over changes. Such deficiencies may violate documentation and risk assessment requirements, as seen when audit teams use AI for journal entry testing without documenting the rationale for flagged entries or threshold settings.

Data Privacy

Data privacy becomes a concern as firms use AI to handle large amounts of sensitive financial and personal information. This can lead to violations of laws like GDPR and CCPA, especially when client data is processed in cloud or third-party systems. Firms often struggle to maintain consistent policies for data classification, encryption, and access.

Auditor independence may also be at risk if AI tools are built by a firm's advisory arm or are deeply integrated with a client's systems. For instance, if both the firm and client use the same predictive AI tool for forecasting, it could lead to a self-review threat.

AI Skills Gap

A skills gap and overreliance on AI further complicate compliance. Many auditors lack the training needed to critically evaluate AI outputs or to recognize when human judgment should override algorithmic conclusions. This can lead to audit failures, such as misinterpreting a false negative from an AI-driven risk assessment as a clean result.

Validation and Testing

Testing and validating AI tools is another challenge, especially for tools that keep learning over time. Firms need to test tools when they're first used and then on a regular basis, just as they do when relying on third-party service providers. But this is hard to do if the AI vendor doesn't offer enough detail about how the tool works or the controls in place.

Change Management

Managing updates and changes to AI models is also a concern. If a tool is updated or retrained without documentation, it can lead to inconsistent results. For example, a model may flag different transactions in different quarters without any clear reason why.
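To make the documentation and change management points concrete, here is a minimal sketch in Python of what a model change log might look like. Every name, field, and format here is an illustrative assumption rather than a prescribed standard; the point is that each retrain or threshold change leaves a dated, attributable record a reviewer can trace when results shift between quarters.

```python
# Minimal sketch: a change log for an AI model used in journal entry testing.
# All names and fields are illustrative assumptions, not prescribed by any standard.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class ModelChangeRecord:
    model_name: str           # e.g. the journal-entry-flagging model
    version: str              # bumped on every retrain or threshold change
    change_date: date         # when the change took effect
    training_data_hash: str   # fingerprint of the training set used
    threshold_settings: dict  # e.g. {"flag_score": 0.85}
    rationale: str            # why the change was made, in plain language
    approved_by: str          # who reviewed and signed off

def fingerprint_training_data(path: str) -> str:
    """Hash the training data file so a later review can confirm what was used."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def append_change_record(log_path: str, record: ModelChangeRecord) -> None:
    """Append one line per change; the log explains quarter-to-quarter differences."""
    with open(log_path, "a") as log:
        log.write(json.dumps(asdict(record), default=str) + "\n")
```

With a log like this in place, a model that flags different transactions in Q3 than in Q2 can be tied back to a specific, approved change rather than leaving the audit team guessing.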
Many firms also lack a formal AI governance plan tied to their quality management systems, which leads to inconsistent control practices and unclear responsibilities.

Lack of Guidance

Regulators have been slow to issue formal guidance on how AI should be integrated into the audit process, leaving many firms in a state of uncertainty. The good news is that momentum is building. PCAOB Board Member Christina Ho has publicly emphasized the transformative potential of AI in auditing, particularly in automating routine tasks such as cross-referencing data, extracting key contract terms, and documenting interviews. She has advocated for the PCAOB to evolve its standards to promote responsible AI use, calling for transparency, bias mitigation, and auditability in AI tools. Similarly, the International Auditing and Assurance Standards Board (IAASB) has demonstrated its commitment to supporting firms by releasing its Technology Position, a strategic framework that outlines how the board will adapt auditing standards to align with emerging technologies, including AI.

Until these guardrails are firmly in place, firms should proactively develop internal AI frameworks modeled on established control standards. COBIT can support firms in assessing and governing AI systems, including data and system integrity. COSO can be applied to evaluate AI governance, model risk, and internal control implications, particularly when AI impacts financial reporting or ICFR. NIST provides guidance to help firms build trustworthy AI systems and establish appropriate cybersecurity and governance protocols.

Best Practices for Governance

To use AI confidently and compliantly in accounting, especially in regulated environments like audit and assurance, firms should implement strong governance practices that align with both regulatory expectations and ethical standards.

1. Test AI Internally Before Use in Engagements

Before you bring AI into your audits, you'll need to put it through its paces. The starting point is an internal review and certification process, ideally led by your firm's risk or national office. That office should evaluate the AI tool's design, logic, and controls, and may require your vendor to share documentation and control reports and to allow independent testing.

A great way to do this is by running the AI on historical data from past audits with known results. That helps confirm whether the AI reaches the same conclusions auditors already reached. Scenario analysis is another smart move: challenge the AI with tricky edge cases like known fraud or anomalies. This can expose blind spots or bias in the model. A minimal backtesting sketch follows below.

Be sure to maintain a complete audit trail of how the tool was tested and what controls were in place. If any issues pop up during testing, document and resolve them. And before you roll the tool out firm-wide, get an independent review of it. Think of it like a second set of eyes, similar to a concurring partner review. Only once your firm is fully confident in the tool should it be used in your accounting processes.
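Here is a minimal sketch of the historical backtest described above. The ai_flag_entries callable stands in for whatever interface the vendor tool actually exposes, and the data layout is an assumption for illustration, not a known API.

```python
# Minimal backtesting sketch: compare an AI tool's flags against conclusions
# auditors already reached on completed engagements. The ai_flag_entries
# callable and the data layout are assumptions for illustration.

def backtest(historical_entries, known_exceptions, ai_flag_entries):
    """Run the AI tool over past audit data and compare with known results.

    historical_entries: list of journal entry records from completed audits
    known_exceptions:   set of entry IDs auditors actually flagged
    ai_flag_entries:    callable wrapping the vendor tool; returns flagged IDs
    """
    ai_flags = set(ai_flag_entries(historical_entries))

    false_negatives = known_exceptions - ai_flags   # misses: the dangerous case
    false_positives = ai_flags - known_exceptions   # noise: extra review work
    agreement = known_exceptions & ai_flags

    return {
        "agreement_rate": len(agreement) / max(len(known_exceptions), 1),
        "false_negatives": sorted(false_negatives),
        "false_positives": sorted(false_positives),
    }

# Scenario analysis: seed the historical data with known fraud cases and
# confirm every one appears in the tool's flags before certifying it.
```

A false negative here, an exception the tool silently misses, maps directly to the overreliance risk described earlier: treating a miss as a clean result.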
2. Develop AI Governance Policies

Strong policies lay the foundation for responsible AI use. These should outline your standards for data inputs, risk reviews, decision-making responsibilities, and transparency. Deloitte recommends a universal governance policy that applies to all AI technologies across the firm. This policy should define acceptable (and prohibited) use cases, require approval for new AI tools, and establish review intervals.

Ethical usage also needs to be a priority. That means clear guidelines around privacy, bias, and legal compliance, with transparency as a core value. Stakeholders, both internal and external, should understand when and how AI is being used in order to build trust in its use.

To oversee this, consider forming a dedicated AI GRC (Governance, Risk, and Compliance) team. Roles might include a Chief AI Risk Officer, a Data Protection Manager, an AI Project Manager, and an AI Governance Committee. Need help building your framework? Look to proven models like the NIST AI RMF and ISO 42001. COSO's recent guide, Realize the Full Potential of AI, shows how to extend COSO's ERM framework to AI, and it's a great place to start.

3. Implement Data Quality Controls

AI tools are only as reliable as the data they process. The old adage "garbage in, garbage out" underscores the importance of data quality in AI-driven accounting. To minimize the risk of inaccurate or biased AI outputs, firms should implement data validation, cleansing, and standardization processes. High-quality data improves AI performance and supports more reliable audit conclusions. A minimal validation sketch appears at the end of this section.

Protecting sensitive data is also crucial. Firms should limit access to confidential information using role-based access controls (RBAC) and multi-factor authentication (MFA). Audit logs tracking data access provide an added layer of oversight, helping firms monitor and secure critical information.

Data lifecycle management is equally important. Retention and deletion policies should be in place to ensure outdated data does not become a liability. While GDPR is an EU regulation, it sets a high standard for data management and serves as a strong benchmark for firms looking to enhance their data governance practices.
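As a closing illustration of the validation and standardization step, here is a minimal Python sketch. The field names and rules are assumptions for illustration; a real engagement would tailor both to the client's chart of accounts and data formats.

```python
# Minimal data validation sketch for AI input pipelines. Field names and
# rules are illustrative assumptions, not a prescribed standard.
from datetime import datetime

REQUIRED_FIELDS = ("entry_id", "account", "amount", "posted_date")

def validate_entry(entry: dict) -> list:
    """Return a list of issues found; an empty list means the record passes."""
    issues = []
    for field in REQUIRED_FIELDS:
        if entry.get(field) in (None, ""):
            issues.append(f"missing field: {field}")
    # Standardize and sanity-check the amount before any AI tool sees it.
    try:
        float(entry.get("amount"))
    except (TypeError, ValueError):
        issues.append("amount is not numeric")
    # Reject malformed dates rather than letting a model guess.
    try:
        datetime.strptime(str(entry.get("posted_date", "")), "%Y-%m-%d")
    except ValueError:
        issues.append("posted_date is not in YYYY-MM-DD format")
    return issues

def clean_dataset(entries: list) -> tuple:
    """Split records into clean AI inputs and rejects logged for follow-up."""
    clean, rejected = [], []
    for entry in entries:
        problems = validate_entry(entry)
        if problems:
            rejected.append((entry, problems))  # keep the reasons for the audit file
        else:
            clean.append(entry)
    return clean, rejected
```

Keeping the rejected records, along with the reasons they failed, gives the engagement team the documentation trail regulators expect when AI outputs are later questioned.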