Could AI help marshal regulatory refinement and drive progress?

This is a linkpost for https://aurilio.substack.com/p/could-ai-help-marshal-regulatory?s=w

To remain competitive, the US must prioritize regulatory maintenance, cleanup, and refinement. Each of the last four presidents has issued criteria for regulatory maintenance; President Biden’s “Modernizing Regulatory Review” Memorandum breaks from the trend of establishing regulatory agendas through Executive Order, but the break is more in name than in function.

Despite this ongoing attention to regulatory maintenance and refinement, accumulation persists largely unchecked. As economist Patrick McLaughlin put it in written testimony before the Senate Judiciary Committee in 2013: “In 2012, the Code of Federal Regulations—the series of books that contains all regulations in effect at the time of printing—contained over 170 thousand pages of dense legal text with over one million restrictions, the result of the accumulation of regulations over decades and decades of reactive governance.”1 McLaughlin also co-authored a 2016 study estimating that regulatory accumulation since 1980 may have reduced GDP by roughly 25% relative to what it otherwise would have been.2

Before you chalk this up as another example of ossified or sclerotic institutions, consider that other domains seem to be encountering a similar problem. For example, researchers Barbara Biasi and Song Ma find that only a fraction of schools have the resources needed to regularly update their syllabi and incorporate ‘frontier knowledge’, meaning that a large portion of schools risk teaching outdated material.3 And the increasing rate of publishing may actually be slowing down scientific progress. Past a certain point, the more papers that are published in a given field, the more citations flow to already well-cited papers. This ossifies the canon and makes it difficult for new ideas to gain traction.4

These examples highlight a concern expressed by Vannevar Bush in his 1945 essay “As We May Think”.5 Bush saw that technological progress would continually improve compression, allowing us to transmit and store ever larger quantities of data and information. While the war had highlighted the capacity of the scientific community to generate new knowledge more efficiently, he worried that our ability to retrieve, distill, and apply this knowledge would suffer for lack of focused effort.

The process Bush was concerned about is something like refinement. Applied broadly, refinement refers to the process of turning something opaque (like crude oil or a mound of information) into forms that are more obviously useful. In one sense it’s a form of maintenance or cultivation. The goal is not growth, per se, but developing the raw materials for growth. Refined forms are portable and adoptable, like the crystallized objects of economist César Hidalgo’s Why Information Grows. This thicket of knowledge and regulation is both a marker of progress and a burden on it. I don’t think we’re dealing with a case of too much or too little maintenance, but a case of the wrong sort of maintenance.

While existing technologies tend to serve the development and transmission of information and knowledge, Artificial Intelligence (AI) and Machine Learning (ML) appear well-suited to assist with curation, processing, and cleaning up regulation (the sort of maintenance we seem to be struggling with). In a study that used the skill sets listed in job postings for research positions as a proxy for the ‘pervasiveness’ of a technology, researchers found that data-focused technologies and natural language processing, typically represented as machine learning, show the breadth of adoption characteristic of a general purpose technology.6
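
To make the kind of processing I have in mind concrete, here is a minimal, purely illustrative sketch of one common regulatory text-analysis measure: counting restrictive terms such as “shall” and “must” in a block of text, roughly in the spirit of the restriction counts McLaughlin cites. The term list and sample passage below are my own assumptions for demonstration, not RegData’s or any agency’s actual methodology.

```python
# Illustrative sketch only: a toy restriction counter for regulatory text.
# The term list and sample text are assumptions, not an official methodology.
import re
from collections import Counter

RESTRICTIVE_TERMS = ["shall", "must", "may not", "prohibited", "required"]

def count_restrictions(text: str) -> Counter:
    """Count whole-word occurrences of restrictive terms in regulatory text."""
    lowered = text.lower()
    counts = Counter()
    for term in RESTRICTIVE_TERMS:
        # \b anchors ensure we match whole words/phrases, not substrings
        counts[term] = len(re.findall(r"\b" + re.escape(term) + r"\b", lowered))
    return counts

sample_section = (
    "The permittee shall submit an annual report. Facilities must maintain "
    "records for five years and may not discharge without prior approval."
)

print(count_restrictions(sample_section))
# e.g. Counter({'shall': 1, 'must': 1, 'may not': 1, 'prohibited': 0, 'required': 0})
```

Counting restrictive terms across the full Code of Federal Regulations is, roughly, how figures like “over one million restrictions” are produced, though real analyses are considerably more careful about context and false positives.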

Currently, the majority of the literature on the government’s approach to AI/ML concerns the management of others’ adoption and implementation; but dig a little deeper, and we find a potent undercurrent of agencies themselves experimenting with and using this technology to carry out their goals more effectively. (President Trump’s Executive Order 13960, “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government”, issued in late 2020, appears to be the core document establishing a framework for agencies’ use of AI/ML in their own work.)

In a 2020 report titled “Government by Algorithm”, researchers noted that nearly half of all federal agencies have experimented with AI and related machine learning tools. But this saturation may be superficial: the report found that the vast majority of techniques used were either unsophisticated applications or not fully developed. The majority of AI use cases involve regulatory research, analysis, and monitoring; the next most common use case is enforcement.7 One of the more surprising findings is that over half of all applications were developed in-house, evidence of a pleasantly surprising “creative appetite within agencies.”

One of the authors of “Government by Algorithm”, Catherine Sharkey, later published a paper extensively detailing the Department of Health and Human Services’ use of AI/ML in its Regulatory Clean Up Initiative, an effort absent from the Government by Algorithm report. She found that:

HHS is using AI-driven technologies to identify outdated or overly burdensome regulations and to identify areas of duplication and overlap among agencies; it has also used AI to identify regulations for retrospective review and has put itself forward as the leading federal agency for “regulatory reform”. HHS’s Regulatory Clean Up Initiative is the first rule to emerge out of a years-long pilot project to assist agencies’ retrospective review process by identifying outdated rules. HHS reduced the number of requirements for Head Start by 40% and estimated that it had reduced paperwork by 53 million hours, saving $5.2 billion…the [natural language processing] analysis method has revealed numerous reform opportunities.8
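
The kinds of analyses described here, such as flagging duplicative or overlapping sections, can be approximated with off-the-shelf NLP tools. Below is a minimal sketch assuming TF-IDF vectors and cosine similarity over section text; it is a generic illustration, not HHS’s actual pipeline, and the section labels and texts are placeholders I made up.

```python
# Hypothetical sketch: flag potentially duplicative regulatory sections by
# comparing TF-IDF vectors with cosine similarity. Not HHS's actual method;
# section labels and texts below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sections = {
    "Section A (placeholder)": "Grantees must retain all patient records for a period of six years.",
    "Section B (placeholder)": "All patient records shall be retained by grantees for a period of six years.",
    "Section C (placeholder)": "Applications must include a summary of clinical data.",
}

ids = list(sections)
tfidf = TfidfVectorizer(stop_words="english").fit_transform(sections.values())
similarity = cosine_similarity(tfidf)

# Report section pairs whose similarity exceeds a (tunable) threshold.
THRESHOLD = 0.5
for i in range(len(ids)):
    for j in range(i + 1, len(ids)):
        if similarity[i, j] >= THRESHOLD:
            print(f"Possible overlap: {ids[i]} <-> {ids[j]} "
                  f"(similarity {similarity[i, j]:.2f})")
```

With these toy inputs, only the first two sections are flagged; in practice an analyst would still review each flagged pair, which is where keeping such tools ‘supportive’ rather than ‘determinative’ (discussed below) matters.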

Using AI/ML to assist with regulatory review could prove highly valuable for both agencies and the American public. But at the moment, agencies appear to have little wiggle room to innovate or adopt tools that would boost the quality of their work. Researcher Thomas Schillemans has experimented with improving regulatory judgment by using forms of both peer review and outside review by reputed experts. While his results appear promising, he also warns that increasing the cognitive demands on regulators is costly.9 And in 2021, the Department of Health and Human Services was forced to walk back a proposed ‘sunset rule’ amid overwhelming concern that forcing regulations to come under review would overload existing capacity and negatively impact other work. HHS acknowledged that if the rule were enacted, 95% of existing regulations would be eligible for review; it also pointed out that of 18,000 existing sections, more than 12,000 were ten or more years old.10

Information about both best practices and potential risks also appears widely dispersed across agencies, reports, and siloed documents. This fragmented landscape could undermine adoption, leaving substantial gains on the table. Perhaps more importantly, it could aggravate what are thought to be the largest risks of integrating AI: namely, using the technology haphazardly and undermining transparency and accountability. Sharkey, for example, argues that clearly defining whether an AI tool’s output is ‘supportive’ of or ‘determinative’ for an agency decision is a crucial component of accountability.

On a more positive note, regulatory agencies appear willing to adopt and develop novel workflows, and there seems to be a strong wellspring of talent and enthusiasm around AI/ML. Coordinating these energies and this knowledge would be a productive step toward reaping the vast potential benefits AI/ML may unlock.

One potential solution is the establishment of a network or working group made up of federal employees from different agencies and specialties who are enthusiastic and knowledgeable about the intersection of AI/ML technology and existing regulatory workflows. The Plain Language Action and Information Network (PLAIN), in collaboration with the Center for Plain Language, is an excellent case study and potential model. PLAIN was influential in the passage of the Plain Writing Act of 2010 as well as in improving the ‘information architecture’ of government service providers. The Center for Plain Language also publishes an annual report card, grading agencies both on compliance with the Plain Writing Act and on the quality of the language and design of their forms, documents, and websites. Both organizations also provide educational tools and training services.11

The National Artificial Intelligence Initiative (NAII) would be a good candidate for initial support and outreach, though assistance from the Office of Information and Regulatory Affairs would also be desirable. While the goal is an independent committee or group, soliciting interest from these two organizations would likely yield a valuable cross-section of employees.

While the intersection of AI/ML and regulatory maintenance is arguably a niche use case, I think it is the one most worthy of dedicated, coordinated attention and development. Coordinating those with specific, expert knowledge of this niche application has several potential benefits. Those most familiar and practiced can develop recommendations for best practices and for avoiding misuse (both intentional and accidental). They can bring attention to the hidden undercurrent of experimentation and innovation present within agencies and serve as a rebuttal to the impression of institutional stagnation.

A report card that highlights compliance and productivity may encourage broader adoption across agencies, and it would serve as a centralized source for public and agency inquiry. At present, agencies are left to integrate AI into existing processes at their own discretion, even though regulatory review is a process that lends itself to standardization.

Artificial Intelligence and Machine Learning are complex, sophisticated technologies, but applications of them need not be intimidating or a threat to human judgment. The world is growing more complex, if not more dynamic, and that means the rules and standards that guide our activities will inevitably need to be updated or replaced. Properly applied, AI and ML can augment and enhance human judgment and help us establish a regulatory metabolism that will power the progress we so desire.

1McLaughlin, Patrick. “On the Human Costs of the US Regulatory System: Should Congress Pressure Agencies to Make Rules Faster?” Mercatus Center, 2013. https://www.mercatus.org/publications/regulation/human-costs-us-regulatory-system-should-congress-pressure-agencies-make

2Coffey, Bentley, Patrick A. McLaughlin, and Pietro F. Peretto. “The Cumulative Cost of Regulations.” SSRN Electronic Journal, 2016. https://doi.org/10.2139/ssrn.2869145.

3Biasi, Barbara, and Song Ma. “The Education-Innovation Gap,” 2022. https://doi.org/10.3386/w29853.

4Chu, Johan S. G., and James A. Evans. “Slowed Canonical Progress in Large Fields of Science.” Proceedings of the National Academy of Sciences 118, no. 41 (2021): e2021636118. https://doi.org/10.1073/pnas.2021636118.

5Bush, Vannevar. “As We May Think.” The Atlantic, 1945. https://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/

6Goldfarb, Avi, Bledi Taska, and Florenta Teodoridis. “Could Machine Learning Be a General Purpose Technology? A Comparison of Emerging Technologies Using Data from Online Job Postings,” 2022. https://doi.org/10.3386/w29767.

7Ho, Daniel, Catherine Sharkey, Mariano-Florentino Cuéllar, and David Freeman Engstrom. “Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies.” Report submitted to the Administrative Conference of the United States, 2020. https://www-cdn.law.stanford.edu/wp-content/uploads/2020/02/ACUS-AI-Report.pdf

8Sharkey, Catherine. “AI for Retrospective Review.” Belmont Law Review 8 (2021): 3.

9Schillemans, Thomas. “Accountability and the Quality of Regulatory Judgment Processes. Experimental Research Offering Both Confirmation and Consolation.” Public Performance & Management Review, 2022. https://www.tandfonline.com/doi/full/10.1080/15309576.2022.2040034

10The proposed rule and comments can be found at https://www.regulations.gov/document/HHS-OS-2020-0012-0541

11See https://www.plainlanguage.gov/about/ and https://centerforplainlanguage.org/. The 2021 Federal Plain Language Report Card can be found at https://centerforplainlanguage.org/2021-federal-plain-language-report-card/
