I remember in my early days of auditing, there wasn’t much thought given to the completeness and accuracy (C&A) of information. Certainly, engagement teams tied out subledgers to the GL and performed tests of details or analytics to validate information, but there wasn’t the same consciousness as there is currently in this, the information age. Today, it seems testing completeness and accuracy is a never-ending story with always more to be done.
Conscious of the growing need for relevant and reliable information, the PCAOB published its Staff Guidance – Insights for Auditors Evaluating Relevance and Reliability of Audit Evidence Obtained From External Sources in October 2021. This guidance is predicated on AS 1105.06 – Audit Evidence, which states that “Appropriateness is the measure of the quality of audit evidence, i.e., its relevance and reliability. To be appropriate, audit evidence must be both relevant and reliable in providing support for the conclusions on which the auditor's opinion is based.”
In Part 1 of our series on this topic, we wrote about important considerations in evaluating relevance and reliability. While ALL audit evidence must be both relevant and reliable, there is an important distinction between audit evidence that emanates from sources external to the company and information produced by the entity (also known as IPE). The auditing standards are more stringent when considering the relevance and reliability of IPE. AS 1105.10 specifically requires that:
When using information produced by the company as audit evidence, the auditor should evaluate whether the information is sufficient and appropriate for purposes of the audit by performing procedures to:
- Test the accuracy and completeness of the information, or test the controls over the accuracy and completeness of the information; and
- Evaluate whether the information is sufficiently precise and detailed for purposes of the audit.
Despite the continued focus on C&A, teams are still struggling to test IPE sufficiently for C&A. In its most recent inspection observations, the PCAOB identified issues surrounding C&A of information both within the realm of internal control over financial reporting (ICFR) and in substantive testing. Let’s dig into the two potential testing approaches: controls and substantive.
Testing Controls Over C&A
Given that more and more information is generated electronically, it’s becoming increasingly difficult to test C&A without testing controls. When applying a controls approach, remember that controls are first and foremost the responsibility of management. In fact, as part of the management representation letter, management must assert to its “responsibility for the fair presentation in the financial statements of financial position, results of operations, and cash flows in conformity with generally accepted accounting principles” (AS 2805.06). Similarly, with respect to controls, management must state “that management did not use the auditor's procedures performed during the audits of internal control over financial reporting or the financial statements as part of the basis for management's assessment of the effectiveness of internal control over financial reporting” (AS 2201.75).
With that in mind, when testing controls over the C&A of information, it’s important to understand the information pipeline. While the end result may be a simple report, controls over completeness and accuracy require various controls from origination to reporting. As the old adage says, “Garbage in, garbage out.”
What data matters
Before you start tracing the data through the process, you first need to think about the key data that matters. Too often, reports are considered in their entirety, but there is always specific data that is critical to what eventually shows up on the financial statements. For example, for revenue you likely need to know the product sold, the revenue recognition date (e.g., the shipment date), and the purchase price. Additional information may be critical to certain controls over that data (e.g., management review controls), but when understanding the information pipeline, it helps to narrow it down to the data that really matters, especially when testing the reliability of information used in the audit. Too often, a report is tested without addressing all the key data.
Data origination
Consider next the source of the information; where does it originate? Upstream controls are critical as they govern the initial input of information into a system. For instance, when testing an inventory aging report, the input of the initial purchase date is a critical component for calculating the aging of the inventory. These upstream controls typically involve business-process-level controls.
Systems, Interfaces and Data Warehouses
Once source data is input into a system, it’s important to understand the flow of information from input to the reporting system. Sometimes this might be one all-encompassing system, but often there are multiple interfaces, systems, and sometimes even data warehouses involved prior to the generation of a report.
For each system involved in the pipeline, management should have robust information technology general controls (ITGCs) in place to ensure the reliability of the systems. ITGCs themselves don’t provide direct assurance over data or reports; rather, they are designed to ensure that systems are safeguarded from inappropriate access and inappropriate changes throughout the audit period. In other words, if effective ITGCs are in place, then the system should operate as designed, but that means the auditor still needs to validate the design (i.e., test the report itself, which we’ll get to in a later section).
For each interface, it’s important to test the relevant controls that ensure complete and accurate transference of data between systems. Most interfaces involve processing and filtering of data. In these cases, the engagement team will need to validate what filters are in place, whether they are appropriate and how data transmission exceptions are resolved.
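One common way to validate that data moves completely and accurately across an interface is to recompute control totals (record counts, amount totals, and hash totals) on both sides of the transfer and compare them. The sketch below illustrates that idea in Python; the function name and record fields (`txn_id`, `amount`) are hypothetical, not drawn from any particular system.

```python
import hashlib

def control_totals(rows, amount_field="amount", key_field="txn_id"):
    """Compute record count, amount total, and an order-independent
    hash total for a batch of interface records (illustrative only)."""
    count = len(rows)
    amount_total = round(sum(r[amount_field] for r in rows), 2)
    # XOR the per-record digests so row ordering doesn't affect the total
    hash_total = 0
    for r in rows:
        digest = hashlib.sha256(str(r[key_field]).encode()).hexdigest()
        hash_total ^= int(digest, 16)
    return count, amount_total, hash_total

# Same records on both sides of the interface, in a different order:
source = [{"txn_id": 1, "amount": 100.0}, {"txn_id": 2, "amount": 250.5}]
target = [{"txn_id": 2, "amount": 250.5}, {"txn_id": 1, "amount": 100.0}]

assert control_totals(source) == control_totals(target)  # transfer ties out
```

A mismatch in any of the three totals would surface a dropped, duplicated, or altered record; in practice the team would also need to evaluate whether any intentional filtering between the systems is appropriate, which totals alone cannot show.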
Finally, if there are data warehouses involved, similar to systems, it’s critical to understand the controls in place that safeguard the information while it resides in the data warehouse. Data warehouses often store data in different structures than the source system to aid in reporting. How is management comfortable that the data is completely and accurately transferred to the data warehouse?
Reports and queries
The final step in the process is understanding how the information is gathered and aggregated into the report or query. The nature of the controls and testing to be done will always depend on the type of report and output. Standard reports that generate PDFs pose less risk, for example, than customized queries that produce an Excel output.
Regardless of the nature of the report, there is always some testing to be performed. If the report is a simple standard report (e.g., a listing of information), the testing may be limited to understanding the change management controls around the report and validating that the report has in fact not been customized. If, on the other hand, it is a customized query, the engagement team should consider:
These are just some of the questions to consider. The reality is that there is no one way to test a report. The testing approach will always be unique to the company’s systems and controls and will be impacted by the nature of the information being tested.
Substantively Testing C&A
Alternatively, if engagement teams opt not to test controls over C&A, they may substantively test the C&A of IPE. When performing substantive testing without controls reliance, however, it’s important to remember that teams must test the C&A each and every time they obtain a report.
Completeness
There are various ways to substantively test the completeness of information, and the design of the test will always depend on the nature of the information:
- Reconcile the report to an alternative report or source. For instance, an engagement team could reconcile an investment purchases and sales journal to bank or broker statements.
- Perform an account rollforward. Some reports, such as the journal entry listing, can be tested for completeness by rolling the opening trial balance (i.e., prior year balances) forward to the current period balances using the journal entry listing.
- Validate sequential numbering. Some reports, such as cash disbursement journals, may be sequentially numbered, and checking the sequence for gaps can help validate completeness.
- Perform detailed substantive testing. The completeness of some reports may be validated directly, such as through floor-to-sheet testing for physical inventory.

When testing the completeness of a report, it is important to distinguish between completeness as a financial statement assertion and completeness as an information processing objective. Sometimes the same testing can accomplish both, such as floor-to-sheet testing for physical inventory, but often the completeness of a report does not directly translate into completeness for the related financial statement assertion.
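Two of the completeness techniques above, the account rollforward and the sequential-numbering check, reduce to simple arithmetic. A minimal sketch, with hypothetical balances and document numbers:

```python
def rollforward_ties(opening, journal_entries, closing, tol=0.01):
    """Completeness check: the opening balance plus all journal-entry
    activity should reproduce the closing balance (within a tolerance)."""
    return abs(opening + sum(journal_entries) - closing) <= tol

def sequence_gaps(numbers):
    """Return any missing document numbers in a sequentially
    numbered journal (e.g., a cash disbursements journal)."""
    expected = set(range(min(numbers), max(numbers) + 1))
    return sorted(expected - set(numbers))

# Rollforward ties out: 10,000 opening + 300 net activity = 10,300 closing
assert rollforward_ties(10_000.0, [500.0, -200.0], 10_300.0)

# Document 1003 is missing from the journal and should be investigated
assert sequence_gaps([1001, 1002, 1004, 1005]) == [1003]
```

A rollforward that doesn’t tie, or a gap in the sequence, doesn’t by itself mean the report is incomplete, but it flags items the team must resolve before relying on the report.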
Accuracy
Accuracy is an easier concept to grasp, as it is often the nature of a test of details. In other words, accuracy is typically validated by taking a sample of items from a report and reconciling them back to underlying audit evidence. When testing accuracy, it’s important to test the accuracy of all key data fields, not just the amounts.
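The sample-and-reconcile approach can be sketched as a field-by-field comparison between a sampled report item and its supporting evidence. The field names (`product`, `ship_date`, `price`) echo the key revenue data discussed earlier but are otherwise hypothetical:

```python
import random

def field_exceptions(report_row, source_row, key_fields):
    """Return the key data fields where the report disagrees with the
    underlying audit evidence for one sampled item."""
    return [f for f in key_fields if report_row.get(f) != source_row.get(f)]

# Report data vs. underlying support (e.g., invoice and shipping docs)
report = {"INV-1": {"product": "Widget", "ship_date": "2024-03-01", "price": 99.0}}
source = {"INV-1": {"product": "Widget", "ship_date": "2024-03-02", "price": 99.0}}

for item in random.sample(sorted(report), k=1):
    diffs = field_exceptions(report[item], source[item],
                             ["product", "ship_date", "price"])
    # ship_date disagrees with the support, so this item is an
    # exception the engagement team would need to investigate
```

The point of checking every key field, rather than just the amount, is that an item can foot correctly and still be wrong, for example a misstated shipment date that shifts revenue into the wrong period.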
Other Considerations
Though the standard has not changed, with experience and learning, the industry has become more conscious of C&A. Though everything falls under the realm of either testing controls or substantively testing C&A, some specific considerations include:
Key Takeaways
Johnson Global Advisory
1717 K Street NW, Suite 902
Washington, D.C. 20006
USA
+1 (702) 848-7084