In 2014, Denver Eater published an article: What’s in a Name: The Year of the Ampersand. Essentially, for the foodies out there, in 2014, it seemed all new, trendy restaurants used the ampersand (yes, the “&” symbol) to link two words (sometimes entirely unrelated) together and then suddenly, voilà, you had yourself the next hottest restaurant. In Denver, top of mind are Stoic & Genuine, Work & Class, Guard & Grace, and Williams & Graham, to name a few.
In an entirely different industry (arguably less exciting), the same naming trend seems to be picking up heat today. It’s almost impossible to talk about audit without using the words “completeness & accuracy” or “relevance & reliability.” Whether supporting teams on PCAOB inspections or performing in-flight reviews, these paired words seem to surface time and again. In fact, in October 2021, the PCAOB published guidance specifically addressing this topic: Staff Guidance – Insights for Auditors – Evaluating Relevance and Reliability of Audit Evidence Obtained From External Sources.
Quantity & Quality (there’s that ampersand again)
Information technology enables the aggregation of more and more data and provides access to these vast stores of information. More data means more evidence, and logic would dictate that more evidence means a better audit. However, we're all familiar with the concept of "fake news," so it's equally important to evaluate the quality of that information.
As with almost everything related to an audit, it all starts with risk assessment. As the risk increases, so too does the quantity and the quality of the audit evidence needed to address the risk. In its publication, the PCAOB stated:
The concepts of sufficiency and appropriateness of audit evidence are interrelated – the quantity of audit evidence needed is affected by both the risk of material misstatement (in the audit of financial statements) or the risk associated with the control (in the audit of internal control over financial reporting) and the quality of the evidence (i.e., its relevance and reliability).
Quantity is often driven by the nature, timing, and extent of procedures. Quality is driven by the relevance and reliability of the information obtained or used in those procedures. Relevant information from an unreliable source holds little value for the auditor; likewise, reliable information that is irrelevant renders the audit evidence useless.
Factors to Consider
Relevance
The PCAOB states: The relevance of audit evidence refers to its relationship to the assertion or to the objective of the control being tested and depends on the design and timing of the audit procedure.
Essentially, how well does the evidence pertain and/or relate to the assertion being tested? For instance, if an auditor is using industry data to corroborate management’s assumptions, how comparable are the companies underlying the industry data? If you’re auditing a start-up company, pulling information from long-established Fortune 500 companies may not be relevant information.
How disaggregated is the data? Often, the more disaggregated data is, the more relevant it becomes since you can select the data that is most pertinent/similar.
Another factor when considering relevance of data is its age. Typically, current data is more relevant than data from a decade ago. However, it really all depends. Arguably, data from 2020 (the year of COVID) may not be the most relevant data to represent “typical” operations. So perhaps 2019 is more relevant than 2020. Similarly, if we were to have another global pandemic in 2030, well, then the data from 2020 (even if it’s ten years old) may be the most relevant data since it would show how companies fared during a global pandemic (which has not been a common occurrence).
Reliability
As it relates to reliability, the PCAOB states: The reliability of audit evidence depends on the source and nature of the evidence and the circumstances under which it is obtained.
Source of information
When considering the source, consider the expertise and reputation of the party providing the data. We inherently do this ourselves when we read the news: headlines from NPR are generally more trusted than the sensational scandals reported by grocery store tabloids. The same concept applies to audit evidence. Factors that might increase the reliability of a source include regulatory oversight and statutory mandates and reporting. US banks, for instance, are generally accepted as providing reliable information given the stringent regulatory environment in which they operate. Finally, auditors need to consider conflicts of interest. A research study on the effects of leaded gasoline funded by the manufacturers of lead additives presents a clear conflict of interest (and yes, this was the case for years when cars used leaded gasoline). Auditors likewise need to consider the source of information and its potential relationship to the company being audited.
In addition to analyzing each source, the more sources that can be obtained, generally, the more reliable the information becomes. For instance, if a company is using a market multiple approach to determine the enterprise value of a company, the more multiples that can be obtained (assuming they’re all relevant), the more reliable the information becomes.
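The intuition that a larger set of comparable multiples supports a more dependable estimate can be sketched with a toy calculation. The figures and the EV/EBITDA framing below are hypothetical illustrations (not drawn from the PCAOB guidance): a small peer set and a larger peer set produce similar medians, but the larger, more consistent set shows far less dispersion.

```python
import statistics

def implied_enterprise_value(ebitda, peer_multiples):
    """Estimate enterprise value from comparable EV/EBITDA multiples.

    Uses the median to dampen the effect of outlier comparables; the
    sample standard deviation gives a rough sense of how consistent
    (and thus how reliable) the peer set is.
    """
    if not peer_multiples:
        raise ValueError("need at least one comparable multiple")
    median_multiple = statistics.median(peer_multiples)
    spread = statistics.stdev(peer_multiples) if len(peer_multiples) > 1 else None
    return ebitda * median_multiple, median_multiple, spread

# Hypothetical subject company with EBITDA of 10.0 (in millions).
# Three comparables vs. seven: both imply a similar value, but the
# larger set's tighter spread lends more support to the conclusion.
ev_small, m_small, s_small = implied_enterprise_value(10.0, [8.0, 9.0, 14.0])
ev_large, m_large, s_large = implied_enterprise_value(
    10.0, [8.0, 8.5, 9.0, 9.0, 9.5, 10.0, 10.5]
)
```

This is only a sketch of the statistical intuition; in practice the auditor would also evaluate whether each comparable is itself relevant (industry, size, growth profile) before including it in the set.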
Nature of information
Once the source has been vetted, the auditor must consider the nature of the information being obtained. To the extent the information is “raw data” that has not been manipulated, it is considered more reliable. As data is aggregated, manipulated, and/or synthesized, the data becomes less reliable (given the increased risk of error). However, data that has been reviewed or subject to some sort of “attestation” would inherently become more reliable.
How information is obtained
Auditors should also consider how information is obtained. External data obtained directly by the auditor is more reliable than data provided by a client. Further, the more complex the process to obtain the data, the less reliable it becomes as there is greater risk of error.
Ultimately, all of these factors need to be considered in combination. And there are likely many other factors that could impact relevance and reliability. While no single factor renders information relevant or reliable on a standalone basis, I would caution that one single factor could render the information irrelevant and unreliable.
As auditors consider these factors and review data from various sources, it’s important to maintain professional skepticism. To the extent inconsistent data or contradictory evidence surfaces, the auditor needs to evaluate this; you can’t just ignore it.
Most of the time, it'll be a matter of professional judgment, so document these considerations.
Difficulties with Relevance & Reliability
In working with teams on various audit quality initiatives, there have been a few sources of frustration that perpetually surface: availability of external data, ability to audit external data, and inconsistent application of the “guidance” above.
Availability of external data
While information technology has made data generally easier to access, some companies operate in largely "uncharted" territory. Many of my clients who audit start-up companies struggle to find comparable data in these emerging industries; there just isn't any historical data, often because the other players are small, private start-ups offering brand-new products. In these cases, there simply isn't much information to obtain. For the limited information that is available, auditors often struggle to conclude on its relevance and reliability. For instance, if there is only one public competitor, and it launched a similar product ten years ago, then given the guidance above (a single source with ten-year-old data), the information is arguably no longer relevant. But if that is the ONLY information available, it's also the most relevant. I've seen these cases time and again, and the best advice I can offer is to document all considerations.
Ability to audit external data
Sometimes, external data is available, but how can an auditor really assert the completeness and accuracy of that information? The auditor has the ability to audit the client, but there's no guarantee the client has contractual rights to audit external information (say, from its customers). Take software services. I've worked with many clients who audit software companies. Sometimes the revenue is generated through use of the software. Sometimes the client has insight into that usage. Other times, it must rely on the customer to report usage in order to bill for revenue. Given that the auditor cannot necessarily go out and audit those customers, how can the team really assert completeness and accuracy?
In its publication, the PCAOB states: …[W]e understand that some firms are considering using as audit evidence new information from nontraditional external sources that has become available because of the advances in information technology. To determine the nature and strength of any relationship between this information and the company’s transactions, and to substantiate conclusions reached, the auditor may need to perform additional procedures (e.g., correlation or regression analyses). The PCAOB seems to indicate analytics may be sufficient. The key is that “additional procedures” need to be performed. Again, this will come down to risk, professional judgment, and documentation.
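As a rough illustration of the kind of "additional procedure" the PCAOB mentions, a correlation check between a nontraditional external metric and the company's recorded amounts might look like the sketch below. The monthly usage and revenue figures are invented for the example; a real procedure would use the client's actual data and likely a fuller regression with diagnostics.

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical monthly figures: customer-reported usage units vs.
# revenue recorded by the company for the same months.
usage = [120, 135, 150, 160, 180, 210]
revenue = [240, 268, 301, 322, 358, 419]

r = pearson_r(usage, revenue)
# A correlation near 1.0 supports (but does not by itself prove) that the
# external usage data moves with recorded revenue; a weak correlation
# would prompt further procedures and inquiry.
```

The point is not the statistic itself but that the auditor performed, and documented, a procedure establishing the relationship between the external information and the company's transactions.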
Inconsistent application of the “guidance”
What is frustrating, perhaps, is the inconsistency with which the guidance seems to be applied, or the implicit expectations that have formed over time through inspection findings. Take the example above: AR confirmations (from customers) are considered best practice to obtain comfort over the existence of AR; in fact, it’s required under PCAOB auditing standards. However, in a recent audit inspection, the PCAOB challenged the use of a customer-provided list of revenue transactions indicating the team had failed to test the completeness and accuracy of that information. Why is an AR confirmation considered relevant and reliable but a list of revenue transactions from the customer not? Obviously, it’s more complicated than just that, but it seems inconsistent.
Or take another example: bank confirmations and, similarly, bank statements (which include cash transaction history) are generally accepted as relevant and reliable audit evidence. It's external data from a highly regulated third party. All of that makes sense and is in line with the PCAOB's recent guidance. However, let's go to the broker-dealer industry. Talk about regulation! This industry arguably has just as much oversight and regulatory reporting as banking. Clearing firms act very similarly to banks (in fact, they are often part of banks) except that they deal in securities, which then clear in cash. Although the two are very similar, in my experience supporting clients with broker-dealer inspections, the PCAOB appears to have different expectations, asking engagement teams whether they obtained a SOC 1 report (which provides comfort over the controls in place at a service provider) covering the clearing broker. Why is a bank generally accepted as providing complete and accurate information without the need for controls testing, while a clearing broker requires a SOC 1 report for its information to be considered C&A? Again, it's more nuanced than that, but this also seems inconsistent.
I think this is starting to surface more and more within the PCAOB and that’s partly why they issued this guidance. It’s becoming a very hot topic.
Moral of the Story
Ultimately, the PCAOB is trying to get firms to understand that getting data is one thing, but there is still more work to be done (and documented). Sometimes, the relevance and reliability is incredibly obvious. Sometimes it’s not as clear.
Regardless of the frustrations, perhaps the key is to document the considerations to evidence that the engagement team weighed the relevant factors and to capture IN WORDS the professional judgment exercised at the time of the audit. As long as audits incorporate professional judgment, so too will PCAOB inspections. And so, the only way to defend your position is to ensure it was documented at the time of the audit. And when you think you've documented enough, add more.
As is the way with any trend, the “AMPERSAND” naming convention seems to be coming to a close. Sadly, one of my favorite restaurants, Church & State in LA, closed its doors in 2019. In Denver, Beast + Bottle closed its doors during the pandemic. And so it goes. But unlike restaurants, the trend in auditing is not going to fade away. Rather, I anticipate the concepts of “completeness & accuracy” and “relevance & reliability” will only become more critical concepts.
In fact, the rise of AI and data analytics threatens to automate much of the audit profession and will disrupt the industry as we know it. Maybe my role will be obsolete in 15-20 years as traditional auditing may go by the wayside, but it will be entirely predicated on the concept that data is complete, accurate, relevant and reliable. Hopefully by then, I’ll be in retirement running a cozy little bed and breakfast which of course will be called “C&A, R&R.” To former auditors, it’ll be an homage to “Completeness & Accuracy, Relevance & Reliability,” but to everyone else, it’ll simply be known as “Cocktails & Accommodations, Rest & Relaxation.”
Dane Dowell is a Director at Johnson Global Accountancy who works with PCAOB-registered accounting firms to help them identify, develop, and implement opportunities to improve audit quality. With over 12 years of public accounting experience, he spent nearly half of his career at the PCAOB where he conducted inspections of audits and quality control. Dowell has extensive experience in audits of ICFR and has worked closely with attorneys in the PCAOB’s Division of Enforcement and Investigations. Prior to the PCAOB, he worked with asset management clients at PwC in Denver, Singapore, and Washington, DC.
Johnson Global Advisory
1717 K Street NW, Suite 902
Washington, D.C. 20006
USA
+1 (702) 848-7084