Please Mind the Gap: Identifying Control Gaps in ICFR

Tourists the world over love to visit London and ride the Tube, always listening for the famous announcement, “Please mind the gap” (delivered in that terrific British accent, of course). The announcer says it for good reason: there really is a gap between the train and the platform, and for the unwary passenger, a foot through that gap could result in any number of injuries.


Much like the gap on the Tube, a control gap can be a dangerous thing. While auditors and accountants don’t usually risk losing a limb, a control gap could be an undiscovered material weakness, which could allow a material misstatement to slip through and cause any number of damages to investors and the public.


Control gaps are hard to identify, but they matter significantly. Almost twenty years after the implementation of SOX, it has become easy for auditors to test the internal controls that are in scope (i.e., the controls we can see). But what if the issuer doesn’t have a control in place to cover a potentially material risk? Or what if the auditor did not scope in an important control covering a relevant assertion of a significant account? It is much harder to identify a problem we cannot see, or, in other words, a gap.


Often, the focus of controls testing is on evaluating the design and operating effectiveness of internal controls. In fact, many of the PCAOB’s recurring findings concern firms’ failures to sufficiently test the design and operating effectiveness of controls, such as management review controls (MRCs). What is perhaps less well known is that the PCAOB also takes issue with engagement teams’ identification of controls to address risks. In its Staff Preview of 2018 Inspection Observations, which details recurring deficiencies, the PCAOB noted, “Auditors did not select controls for testing that address the specific risks of material misstatement.” Similarly, in its 2019 observations, the PCAOB said, “Auditors did not identify and test controls that sufficiently addressed the risks of material misstatement related to relevant assertions of certain significant accounts.” These issues translate into control gaps.


If you read through the auditing standards, specifically AS 2201, An Audit of Internal Control Over Financial Reporting That Is Integrated with an Audit of Financial Statements, you’ll notice that there is actually very little guidance on testing the design and operating effectiveness of internal controls. The first 41 paragraphs of AS 2201 (and for context, there are 98 paragraphs, not including the appendices) deal with planning the audit, assessing risk, incorporating materiality, understanding the control environment, scoping significant accounts and assertions, understanding likely sources of potential misstatement, and finally, selecting controls to test. Almost half of the standard provides guidance to help auditors identify and select controls that address risks of material misstatement. Then the PCAOB provides four paragraphs (AS 2201.42-.45) on testing the design and operating effectiveness of controls. WOW! Forty-one paragraphs to ensure auditors select the appropriate controls and only four to ensure auditors appropriately test them.


In my experience with engagement teams, the majority of the time and energy goes into testing internal controls, and very little goes into analyzing which controls are in scope. In fact, most teams I know take the “same as last year” approach without critically re-assessing the relevant risks and assertions and the likely sources of potential misstatement, or re-evaluating whether the right controls have been selected to address those risks. Control gaps are significant and can just as easily amount to a material weakness as an ineffective control, whether the deficiency is one of design or of operating effectiveness.


Given the significance of first identifying the appropriate controls and then testing those controls, consider the following: 


Data lineage and process flows 


The industry knows the importance of walkthroughs, but I have found that they are often narrow in scope and have become “perfunctory.” Many teams simply perform a walkthrough to understand the design of a control. But a walkthrough is actually intended to follow a transaction from start to finish; in other words, to walk through the entire process. When a control occurs in the process, then yes, the engagement team should ask clarifying questions to understand and evaluate the design of that specific control, but transactions don’t simply move from control to control. There is a process flow, and teams need to understand that process in its entirety. I often advocate the use of flow charts. If the client doesn’t have them, the engagement team should consider creating a flow chart to help navigate the walkthrough. At each step in the process, the engagement team should ask, “What happens next?” rather than, “What’s the next control?” A flow chart should map this exactly, allowing the engagement team to more easily identify potential control gaps.
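To make the idea concrete, here is a minimal sketch (in Python, with entirely hypothetical step and control names) of a walkthrough that follows the process flow rather than the control listing: each step prompts “what happens next?”, and steps with no associated control stand out as points to probe for possible gaps.

```python
# A minimal sketch of a walkthrough driven by the process flow rather than
# the control listing. Step names and controls are hypothetical illustrations.

process_flow = [
    {"step": "Customer order entered",            "control": None},
    {"step": "Order priced from the master file", "control": "Automated price check"},
    {"step": "Goods picked and shipped",          "control": None},
    {"step": "Invoice generated",                 "control": "Three-way match"},
    {"step": "Revenue posted to the GL",          "control": None},
]

# Walk the process start to finish, asking "what happens next?" at each step
# and noting the steps where no control occurs -- candidates to probe for gaps.
for index, current in enumerate(process_flow):
    note = current["control"] or "no control here -- is anything needed?"
    print(f"{index + 1}. {current['step']}  ({note})")
    if index + 1 < len(process_flow):
        print(f"   What happens next? -> {process_flow[index + 1]['step']}")
```

The point is not the code itself but the discipline it mirrors: every step of the process gets asked about, not just the steps where a control already sits.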


Data lineage, a concept more often used in the IT realm, is just as important here. It is vital that engagement teams understand the flow of data, starting with where it originates and ending with where it lands (ultimately, the general ledger).

For instance, data that flows through multiple systems (Systems A, B, and C) will need controls to ensure the complete and accurate transfer of information from system to system. If the engagement team only performs a walkthrough of a specific control (Control B.1), it may conclude that Control B.1 is appropriately designed. But without a walkthrough of the entire process, the team may miss the fact that the data originates in System A and thus may need an “input control,” as well as an interface control to ensure the complete and accurate transfer of data between Systems A and B. Likewise, without a walkthrough of the entire process, the team may miss the need for a control governing the complete and accurate transfer of data between Systems B and C (which happens after Control B.1). These would all be control gaps that could jeopardize the relevant assertions of the account and result in a material weakness.
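As a rough illustration, the sketch below (again Python, with the control coverage entirely hypothetical) lays out the Systems A/B/C lineage and flags every point in the flow that has no control mapped to it. With only Control B.1 documented, the missing input control at System A and the missing interface controls fall out immediately.

```python
# A minimal sketch of a data-lineage gap check. System names mirror the
# Systems A/B/C illustration above; the control coverage is hypothetical.

data_lineage = ["System A", "System B", "System C", "General Ledger"]

# Controls identified so far, keyed by the point in the flow they cover.
# Only Control B.1 (processing within System B) has been walked through.
controls = {
    ("System B", "processing"): ["Control B.1"],
}

def find_lineage_gaps(lineage, mapped_controls):
    """Flag points in the data flow with no control mapped to them."""
    gaps = []
    # Input control where the data originates.
    if not mapped_controls.get((lineage[0], "input")):
        gaps.append(f"Input control missing at {lineage[0]}")
    # Interface control over each hand-off between systems.
    for upstream, downstream in zip(lineage, lineage[1:]):
        if not mapped_controls.get((upstream, f"interface to {downstream}")):
            gaps.append(f"Interface control missing: {upstream} -> {downstream}")
    return gaps

for gap in find_lineage_gaps(data_lineage, controls):
    print(gap)
```

Whether this lives in a spreadsheet or a script matters far less than the habit it represents: list every origination point and hand-off, and ask what covers it.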


Especially today, given the integration of IT systems and automation, it is important for engagement teams to perform walkthroughs of entire processes with both financial statement and IT auditors. 


Risk matrices 


Most issuers have risk and control matrices. These matrices can be burdensome given the size and amount of information they contain, so I encourage teams to create a simpler version of their own; these simplified matrices can be the most effective way to map significant accounts, risks, and likely sources of potential misstatement (also referred to as “what could go wrong,” or WCGW) to specific controls. Each significant account will have relevant assertions. Each relevant assertion will have multiple WCGWs. And each WCGW should have at least one control that specifically addresses that risk. Though the matrix is usually completed by more junior team members, managers and partners should spend a significant amount of time reviewing the mapping, since it is the foundation for the identification and scoping of controls.
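Here is a minimal sketch of that mapping (hypothetical accounts, assertions, WCGWs, and control IDs throughout): a nested structure from account and relevant assertion down to WCGWs and their mapped controls, with a simple check that surfaces any WCGW left without a control.

```python
# A simplified risk-and-control matrix sketch. The accounts, assertions,
# WCGWs, and control IDs are hypothetical; the point is the account ->
# relevant assertion -> WCGW -> control mapping and the check that no
# WCGW is left without at least one control.

risk_matrix = {
    ("Revenue", "Occurrence"): {
        "Fictitious sales recorded before shipment": ["C-REV-01"],
        "Service revenue recognized without evidence of delivery": [],  # a gap
    },
    ("Inventory", "Existence"): {
        "Inventory on the GL not physically on hand": ["C-INV-02", "C-INV-03"],
    },
}

def control_gaps(matrix):
    """Return every (account, assertion, WCGW) with no mapped control."""
    return [
        (account, assertion, wcgw)
        for (account, assertion), wcgws in matrix.items()
        for wcgw, mapped_controls in wcgws.items()
        if not mapped_controls
    ]

for account, assertion, wcgw in control_gaps(risk_matrix):
    print(f"Control gap: {account} / {assertion} / {wcgw}")
```

A check like this is exactly the kind of review a manager or partner should perform on the full matrix: not re-testing controls, but confirming that nothing falls through the mapping.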


Once the controls are scoped, it’s a matter of testing their design and operating effectiveness. I realize that testing can take a significant amount of time as well, but generally speaking, the more time spent up front planning an audit (including understanding and scoping controls), the better the execution of the audit.


Errors and exceptions 


As we move into substantive testing, I encourage teams to consider errors and exceptions, as these generally have control repercussions. Some errors may not be significant, such as reconciling differences between the subledger and the general ledger due to rounding. A true error, however, often indicates a breakdown in controls. When a team finds an error, it should consider whether the controls in place operated. If they did operate as designed and the error still occurred, then either the controls are not effectively designed or there is a control gap somewhere in the process that should have caught the error. Of course, take materiality into account; there may not be a risk of material misstatement, but the audit team should consider the effect of the errors, the potential for a material misstatement or material weakness, and document its judgments around these considerations.


Regarding exceptions, while engagement teams are quick to explain why exceptions are not errors, they should consider whether there are control implications. For instance, in a substantive test over revenue occurrence, I’ve seen numerous tick marks explaining why there are no shipping documents (the evidence of occurrence) for a specific selection. Maybe it’s because this particular sale is actually a service rather than a shipment. Okay, point noted; I’m not challenging the validity of the revenue. However, for this specific selection, the typical revenue recognition process does not apply, which means the client should have controls designed and in place to ensure revenue recognition for this revenue stream is in accordance with the accounting guidance. Did the client, and did the engagement team, identify a control to cover this “exception”? Regulators are keen to identify these types of situations as potential control gaps. Again, take materiality into account; to the extent this is an immaterial revenue stream, perhaps no controls need to be identified and tested, but the engagement team should at least document its judgment.


When performing walkthroughs, I can’t emphasize enough the importance of asking control owners, “What happens when there’s an exception?” or, for automated processes, “Is it possible to have a manual workaround?” These are potential exception paths that should have controls identified and operating to ensure there is no gap.


“Fresh” reviews 


Finally, I encourage engagement teams to get “fresh” perspectives. While returning to the same client year after year builds a strong understanding of that client (which is critical to identifying potential control gaps), in the name of efficiency most audit approaches are simply rolled forward from the prior year. Thus, instead of re-assessing the risks and the in-scope controls meant to address them, teams simply adopt the prior-year scoping.


Taking a step back, though, is that really the most effective or appropriate approach? The initial scoping of controls is typically performed either a) upon client acceptance or b) upon initial SOX implementation.


  • Client acceptance: In a first-year audit, regardless of the size of the company, there is so much “learning” that occurs that it’s almost foolish to think the scoping of controls made in that first year is the “best” or “most appropriate” scoping. The engagement team will continue to learn and better understand the client over time, and will therefore identify additional controls needed to cover relevant risks. 
  • SOX implementation: Similarly, the first year of a SOX implementation is a huge undertaking. While the controls may cover the relevant risks at the time of implementation, there are often oversights that both management and auditors discover over time, and so the controls will need to keep adapting. Layer onto this the fact that clients themselves are perpetually changing, and it becomes important to critically re-assess the scoping of controls every year. 


To get fresh perspectives, consider the use of in-flight and lookback reviews or targeted ICFR gap analyses across clients to help engagement teams identify potential control gaps. It is important to have objective perspectives that can raise new insights about the scoping of controls. 


And now, back to London: the mere fact that there is such a large separation between the train and the platform may well indicate that there was a control gap somewhere in the design and construction of the London Underground. I’m not sure who first identified the error, whether it was the engineers or an injured passenger, but clearly the Tube is aware of the issue and has implemented a control to cover the risk. It goes: “Please mind the gap.” 


About Johnson Global Advisory 

Johnson Global partners with leadership of public accounting firms, driving change to achieve the highest level of audit quality. Led by former PCAOB and SEC staff, JGA professionals are passionate and practical in their support to firms in their audit quality journey. We accelerate the opportunities to improve quality through policies, practices, and controls throughout the firm. This innovative approach harnesses technology to transform audit quality. Our team is designed to maintain a close pulse on regulatory environments around the world and incorporate solutions which navigate those standards. JGA is committed to helping the profession in amplifying quality worldwide. 


Visit www.johnson-global.com to learn more about Johnson Global. 
