The GAO has released a comprehensive summary of the information exchanged at its forum on “Artificial Intelligence: Emerging Opportunities, Challenges, and Implications” (GAO-18-142SP). Among the topics discussed by the assembled experts was the merit of linking regulation to technology, or “regtech.” One forum participant explained that “in such an alternative regulatory channel, those entities being regulated could be afforded the option to submit their regulatory data in a more transparent and real-time manner for review by regulators while reducing other reporting requirements. Implementing a data-intensive regtech approach, where data are reviewed against understood standards, would allow both regulators and those they are regulating to better understand whether desired outcomes are being achieved.”

As the NASBA Compliance Assurance Committee and Uniform Accountancy Act Committee consider revisions to the UAA and the Model Rules, predictions of future possibilities seem particularly pertinent. In 2015 the AICPA issued a concept paper, “The Future of Practice Monitoring,” envisioning a technology platform that would join human oversight with near real-time continuous analytic evaluation. However, the changes being considered now for the UAA and Model Rules are refinements of the current peer review program to more accurately reflect present practices.

The GAO’s report states that one forum participant “observed that technology exists to address many problems in finance, but poor regulation practices have hindered these potential gains. Regulatory structures, according to this participant, are full of gaps and are based on long-standing history and mandates rather than current practices. As a result, this participant believes that the current regulatory framework will not allow innovation and may miss negative changes that enter into the system.”

Researchers are working to develop AI systems that “are not only capable of adapting to new situations, but also are able to explain to users the reasoning behind these decisions.” However, “today’s machine-learning systems are black-box systems for which users are unable to understand why the system makes a specific decision or recommendation, why a decision may be in error, or how an error can be corrected. The goal of explainable AI is to develop machine-learning systems that provide an explanation for their decisions and recommendations and allow users to know when, and why, the system will succeed or fail.”

The GAO concludes its report by stating that “the testimonial evidence of experts in this engagement is not being used to develop GAO recommendations for executive-branch actions or to present matters for congressional consideration.”
