A Summary of My AI Work
My work on AI tends to fall into three categories: using LLMs for cost-benefit analysis, policy analysis of state and federal AI laws, and the economics of AI and transformative AI.
LLMs for cost-benefit analysis #
I’ve been experimenting with large language models (LLMs) to estimate AI bill compliance costs. In March 2025, I published “How much might AI legislation cost in the U.S.?,” which compared official compliance cost estimates with those from leading LLMs for two recent amendments to the California Consumer Privacy Act (CCPA) and for regulations implementing President Biden’s Executive Order on AI.
After a deep dive into these three regulations, I prompted ChatGPT, Claude, and Grok to act as compliance officers at companies, read each of the new rules, and estimate the hours needed for first-year implementation and ongoing compliance.
What was surprising is that the LLMs were usually close to the official first-year estimates, but they tended to predict much higher ongoing annual costs, suggesting that official figures may systematically underestimate long-run burdens. The chart below displays all of the estimates.
Similar to discounted cash flow (DCF) analysis, we can think of a regulation as a stream of future costs over a 10-year period. By summing those costs and discounting them back to present value, using a discount rate that reflects their long-term time horizon, we can estimate the current market value of the regulation. The table below calculates those regulatory costs for each scenario using the federal two percent discount rate.
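The discounting step can be sketched in a few lines of Python. This is a minimal illustration of the DCF-style calculation, not the article's actual model, and the dollar figures below are hypothetical placeholders.

```python
# Illustrative sketch: discount a 10-year stream of regulatory
# compliance costs back to present value, DCF-style. Year 1 carries
# the first-year implementation cost; years 2..n carry the ongoing
# annual cost. The 2% rate mirrors the federal discount rate above.

def discounted_cost(first_year_cost: float,
                    ongoing_annual_cost: float,
                    years: int = 10,
                    discount_rate: float = 0.02) -> float:
    """Present value of first-year plus ongoing compliance costs."""
    pv = 0.0
    for year in range(1, years + 1):
        cost = first_year_cost if year == 1 else ongoing_annual_cost
        pv += cost / (1 + discount_rate) ** year
    return pv

# Hypothetical firm: $150,000 to stand up compliance, $60,000/year after.
total = discounted_cost(150_000, 60_000)
print(round(total))
```

With a positive discount rate, later years count for less, so two scenarios with the same nominal 10-year total can have very different present values depending on how front-loaded the costs are.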
All of the graphics from the piece are listed below:
- Costs for California’s Risk Assessment (RA) Rules
- Costs for California’s Automated Decision Making Technology (ADMT) Rules
- Market Rate Costs for California’s CCPA Rules
- Discounted Regulatory Cost Analysis
- Compliance Cost Estimates for Three AI Regulations
- Discounted Regulatory Costs for Three AI Regulations
In June, I published a piece in City Journal on New York’s RAISE Act, to which I applied the same method. As I wrote,
I asked the leading LLMs to read the RAISE Act and estimate the hours needed to comply with the law in the first year and in every year after that for a frontier model company. The results, displayed in the table below, suggest that initial compliance might fall between 1,070 and 2,810 hours—effectively requiring a full-time employee. For subsequent years, however, the ongoing burden was projected to be substantially lower across all models, ranging from 280 to 1,600 hours annually.
The wide range in estimates underscores the fundamental uncertainty with the RAISE Act and other similar bills. The fact that sophisticated AI models are not converging on consistent compliance costs suggests just how unpredictable this legislation could prove in practice. The market is moving quickly. We need laws that prioritize effective risk mitigation over regulatory theater.
A chart of the costs of the RAISE Act is available here and also posted below.
During a hackathon in May, I coded up a first version of a prompt script, which I will iterate on in the future. It is built on persona prompting, where you direct the LLM to take on a role and then answer questions. I set up the script to vary personas by industry, resources, legal team, and familiarity with the law.
The benefit of this kind of scripting is that you can run many estimates simultaneously and then summarize the results. In the future, I am going to match these personas with what we know about the market to create more accurate predictions. The code is still messy, but I intend to come back to it when I work on the full paper this fall.
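The core of such a script can be sketched as follows. The persona attributes and the prompt wording here are illustrative assumptions, not the actual script, and the LLM call itself is omitted; the point is how a cross-product of persona traits generates many estimates to run in parallel.

```python
# Sketch of persona prompting: cross persona attributes to generate
# one compliance-estimation prompt per simulated respondent.
# All attribute lists and the template wording are hypothetical.
from itertools import product

INDUSTRIES = ["healthcare", "financial services", "retail"]
RESOURCES = ["well-resourced", "thinly staffed"]
LEGAL_TEAMS = ["in-house counsel", "outside counsel only"]
FAMILIARITY = ["familiar with the law", "new to the law"]

def build_persona_prompts(bill_text: str) -> list[str]:
    """Generate one prompt per persona combination."""
    prompts = []
    for industry, res, legal, fam in product(
            INDUSTRIES, RESOURCES, LEGAL_TEAMS, FAMILIARITY):
        prompts.append(
            f"You are a compliance officer at a {res} {industry} company "
            f"with {legal}, {fam}. Read the bill below and estimate the "
            f"hours needed for first-year implementation and for ongoing "
            f"annual compliance.\n\nBILL TEXT:\n{bill_text}"
        )
    return prompts

prompts = build_persona_prompts("<bill text goes here>")
print(len(prompts))  # → 24
```

Each prompt would then be sent to the model, and the returned hour estimates summarized across personas.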
My recent article, “The Hidden Price Tag of California’s AI Oversight Bill,” continues this line of research. For this piece, I thought I would try to push the LLMs further by analyzing the potential impact of California’s AB 1018.
Although it didn’t pass, this bill would have applied regulations to any decision that impacts the cost, terms, quality, or accessibility of employment-related decisions; education and vocational training; housing and lodging; utilities; family planning, adoption services, and reproductive services; health care and health insurance; financial services; the criminal justice system; legal services; arbitration; mediation; elections; access to government benefits or services; places of public accommodation; insurance; and internet and telecommunications access. Even California’s State Water Board warned that Excel workbooks could trigger regulatory requirements. So I wondered, could LLMs help figure out which businesses would be regulated?
For the first part of this project, I followed the typical method for running a cost calculation in public policy, as I did in my two previous pieces. First, you estimate the hours of compliance (table), then multiply them by market labor rates (table) to calculate an economic cost for a firm. The compliance costs for individual firms are detailed below.
Then you multiply this number by the number of impacted businesses. However, estimating the number of impacted businesses tends to be a blunt measure, so I used a second set of scripts to estimate which industries are likely to be affected by the law. All of the data can be found in this spreadsheet. From there, I projected these costs over a ten-year period and applied standard economic methods to arrive at a discounted cost, as detailed above.
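The arithmetic behind this two-step method is simple enough to sketch. The figures below are hypothetical placeholders, not the article's estimates, and the optional compliance share mirrors the 5% scenario discussed next.

```python
# Sketch of the standard public-policy cost calculation:
# hours x market labor rate = per-firm cost, then scaled by the
# number of impacted businesses. All numbers are hypothetical.

def firm_cost(hours: float, hourly_rate: float) -> float:
    """Annual compliance cost for a single firm."""
    return hours * hourly_rate

def economy_wide_cost(per_firm: float, n_firms: int,
                      compliance_share: float = 1.0) -> float:
    """Scale per-firm cost by impacted firms; compliance_share
    optionally assumes only a fraction of firms comply (e.g. 0.05)."""
    return per_firm * n_firms * compliance_share

per_firm = firm_cost(hours=500, hourly_rate=120)        # $60,000
total = economy_wide_cost(per_firm, n_firms=10_000)     # $600,000,000
total_5pct = economy_wide_cost(per_firm, 10_000, 0.05)  # $30,000,000
print(per_firm, total, total_5pct)
```

Each annual total would then be fed into the discounting step described earlier to produce a ten-year present value.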
The result was three tables: an estimate of “Economy-wide Discounted Regulatory Costs,” another for “Economy-wide Discounted Regulatory Costs with 5% Compliance,” and finally an estimate of “Discounted Regulatory Costs for Individual Companies.” I tend to think the two economy-wide estimates were heroic, writing that,
The LLM classifications represent informed predictions rather than definitive legal interpretations of AB 1018’s scope. More importantly, the NAICS matching process necessarily involves aggregation. Specific business types identified by LLMs are matched to broader industry categories in the official data. This means that the estimates are sure to be higher than actual impact.
However, I am more confident in the ten-year totals for individual companies, which represent the value of sustained compliance costs that firms would need to factor into their long-term business planning and decisions about adopting automated systems. That chart is reprinted below.
Still, these rough estimates represent just the tip of the iceberg. They capture only the direct compliance costs of hiring staff, conducting audits, implementing new processes, and maintaining documentation. What they don’t account for are the cascading economic effects that would ripple through entire sectors. Every dollar spent on regulatory overhead is a dollar not invested in innovation, service improvements, or competitive pricing. For the economy as a whole, it would represent a massive shift of resources from productive activities to regulatory compliance.
In the coming months, I will formalize all of this work in an academic paper.
State and Federal Regulation #
“We need to get ahead of this thing” is a popular phrase among policymakers when discussing AI. But this framing misses a crucial point: significant AI regulation is already happening, in the states, through regulatory agencies and the executive, as well as in Congress and the courts.
In “AI’s Automatic Stabilizers,” I walked through the governance mechanisms that are already regulating AI systems. Like the automatic stabilizers in fiscal policy that steady the economy without new laws, AI’s regulatory stabilizers are embedded in existing law and will continue to guide AI development even without a comprehensive federal AI statute. This includes:
- Consumer protection authorities, both federal and state, police “unfair or deceptive acts or practices.” The Federal Trade Commission has made clear it intends to use this power to its fullest;
- Property and contract law, now central to copyright disputes involving OpenAI, Microsoft, Meta, and others;
- Tort and common law, which let injured parties seek damages from AI-related harms;
- Product recall authority, as shown when NHTSA ordered Tesla to recall its autonomous driving software;
- Insurance and compensation systems, which indirectly shape AI risk-taking by pricing liability; and
- Sectoral regulatory adaptation, where agencies such as the Department of Education, EEOC, CFPB, FCC, and FEC are extending existing frameworks to AI systems.
Indeed, there is value in waiting to regulate, which I explored in “The value of waiting: What finance theory can teach us about the value of not passing AI Bills.” Borrowing a concept from real options theory, I explained why acting too early can eliminate future flexibility. Like companies weighing an investment, regulators hold a regulatory real option, where the smart choice is often to wait for more information rather than rushing to act.
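The real-options intuition can be made concrete with a toy two-state example. The probabilities and payoffs below are assumed numbers for illustration, not drawn from the piece: regulating now locks in an expected payoff before uncertainty resolves, while waiting lets the regulator observe the state and act only when regulation helps.

```python
# Toy real-options comparison: regulate now vs. wait one period.
# Probabilities, payoffs, and the discount factor are hypothetical.

def act_now(p_good: float, payoff_good: float, payoff_bad: float) -> float:
    """Expected payoff of regulating before uncertainty resolves."""
    return p_good * payoff_good + (1 - p_good) * payoff_bad

def wait_then_act(p_good: float, payoff_good: float, payoff_bad: float,
                  discount: float = 0.98) -> float:
    """Expected payoff of waiting: act only in states with positive payoff."""
    return discount * (p_good * max(payoff_good, 0.0)
                       + (1 - p_good) * max(payoff_bad, 0.0))

# Hypothetical: 50% chance regulation helps (+100), 50% it harms (-80).
now = act_now(0.5, 100, -80)         # 0.5*100 - 0.5*80 = 10
wait = wait_then_act(0.5, 100, -80)  # 0.98 * 0.5 * 100 = 49
print(now, wait)
```

The gap between the two numbers is the option value of waiting; it shrinks as uncertainty falls or as the cost of delay rises.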
The real danger isn’t under-regulation but fragmentation as states write their own rules. And that fragmentation is already underway.
In “The Lurking Dangers in State-Level AI Regulation,” I laid out my concerns with Colorado’s AI Act, the first comprehensive state AI law. Governor Jared Polis signed the bill into law, but with deep reservations, warning in a signing letter about “the impact this law may have on an industry that is fueling critical technological advancements” and noting that “government regulation that is applied at the state level in a patchwork across the country can have the effect to tamper innovation.” It was sad to see Polis not put up a fight against the bill, especially since the signing letter highlights the problems that plague every AI bill.
While it thankfully didn’t pass, California’s SB 1047 would have been an even bolder step than what Colorado has adopted. In “A New Proposed AI Regulation in California” and “California’s SB 1047 Moves Closer to Changing the AI Landscape,” I unpacked SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which would have subjected advanced models to safety assessments, kill switches, certification regimes, and broad “know-your-customer” obligations. SB 1047 was the first AI safety bill to gain traction. It was designed to explicitly target frontier models, regulating the act of developing intelligence itself, by imposing sweeping pre-deployment safety obligations on model creators. Yet few advocates seem to appreciate the First Amendment concerns and the challenges in regulating for bias and fairness.
California and Colorado aren’t the only states toying with AI legislation. Texas and Virginia have followed with their own proposals, which I discussed in “The Best AI Law May Be One That Already Exists.” Texas’s TRAIGA would have required AI distributors to prevent algorithmic discrimination even though companies are already subject to anti-discrimination laws, and would have created a new regulatory body with broad powers to issue binding rules on “ethical AI development.” Virginia’s HB 2094, which was eventually vetoed by the governor, borrowed heavily from the EU’s regulatory playbook with similarly vague language around “consequential decisions” and “high-risk” applications.
I expect that the states will continue to adopt AI bills. In the next decade, we are likely to see “a patchwork of fifty AI laws, each trying to get ahead of the future,” as I warned in “The Best AI Law May Be One That Already Exists.” That outcome would mirror the fragmentation we saw in state privacy laws, which I have written about before. It would mean duplicative compliance regimes, overlapping definitions, and conflicting obligations that raise costs without improving accountability. Unless Congress steps in with a preemptive framework or courts intervene on constitutional grounds, America’s comparative advantage in innovation could be blunted not by a single act of overreach, but by hundreds of well-intentioned state experiments.
The Executive has also been active on AI policy.
In the fall of 2023, the Biden Administration adopted the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” As I explained in detail in “Problems with Biden’s Executive Order on Artificial Intelligence,” this EO represented the most sweeping assertion of regulatory authority in decades. Importantly, it invoked the Defense Production Act to compel developers of frontier AI models to share testing and safety data with the government. I suspect that this won’t be the last time this law is used to justify AI regulation.
I also filed comments on the Trump Administration’s “AI Action Plan.” While there is a lot in these comments, I would draw your attention to three parts. First, they make the case that state-level AI regulation presents the greatest risk, noting that “The White House would do well to push back against a tangle of conflicting state rules that make cutting-edge AI too costly or risk-laden to develop and deploy.” Second, I argued that the Administration should champion permitting reform modeled on the successful Prescription Drug User Fee Act (PDUFA), a law that has accelerated pharmaceutical approvals without compromising safety standards by allowing applicants to fund expedited reviews. Third, I called for OIRA to pilot a project that uses AI simulations to standardize how agencies model compliance burdens across diverse businesses.
Instead of blithely pushing for new rules, policymakers should be using AI tools to clean up government processes. In an op-ed in Fox News titled “Let’s use AI to clean up government,” I introduced the notion of ChatGVT, a framing device to explore how LLMs could “provide straight answers about the newest tax plan, if a bill is stuck in committee, or the likelihood that a piece of legislation will pass. Or a ChatGVT could be turned on the regulatory code to understand its true cost to households and businesses.” I extended this analysis in “Government in the Age of AI,” ultimately concluding that the “promise of AI to revolutionize government efficiency is undeniable, but realizing these benefits will require careful implementation that prioritizes accuracy, transparency, and constitutional protections.”
Economics of Transformative AI #
When people ask how AI will reshape the economy, they usually want a simple and clean answer, like a chart showing inevitable job losses or explosive productivity. But that’s not how technology actually works. It’s uneven, conditional, and shaped by the structure of firms, markets, and regulation. To help understand these changes, I’m constantly updating a table titled “Papers on the Economics of AI,” which compiles the empirical work on AI in economics.
Most assume that the decision to adopt a new technology follows a simple logic: you invest when the expected benefits exceed the costs. Yet in practice, the real determinant of success is whether the technology integrates smoothly with the firm’s existing production processes. For my newsletter in The Dispatch, I wrote a two-part series on the economics of AI that discussed “The Economics of AI and the Impending Robot Takeover” and “Transformative Growth with AI Is Likely. Explosive Growth Is Science Fiction.” I’m finding it all too common that people simply dismiss the effort that’s needed to transform a company, let alone transform an industry, with a technology like AI. Adoption doesn’t occur all at once. Firms must not only invest in hardware and software but also reconfigure workflows, retrain workers, and rebuild managerial hierarchies. These are nontrivial costs, which I have been bringing attention to since at least 2020 in a piece titled “Tracing the impact of automation on workers and firms.”
In “To Understand AI Adoption, Focus on the Interdependencies,” I drew a parallel to the telephone switchboard, an innovation that took decades to become automatic because it was entangled with other organizational systems. The interdependencies between call switching and other production processes within the firm presented an obstacle to change. The same is true today of firms thinking of adopting AI: the interdependencies between AI and other production processes will also be an obstacle to change.
Moreover, people tend to conflate robots with advanced AI. But when you look at the data, as I did, you learn that the industries investing the most in robotics tend to be using AI the least. Manufacturing and retail trade spend the most on robotic equipment, but they aren’t going big on machine learning, natural language processing, virtual agents, and the like.
Another strand of my work connects the economics of AI to the political economy of semiconductors. In “Nvidia’s Blockbuster Quarter and the Value of ‘Compute’,” I analyzed Nvidia’s extraordinary rise and the broader economic transformation driven by data centers, energy demand, and chip supply chains. Nearly 40 percent of Nvidia’s revenue now comes from AI compute infrastructure in the form of servers, GPUs, and networking gear that power model training and inference. This surge has turned compute into a new economic input alongside labor and capital.
The result is a new convergence between AI policy, energy policy, and industrial strategy. The CHIPS and Science Act, which I examined in my paper “The CHIPS Act and Semiconductor Economics,” is best understood not just as an industrial subsidy but as a reallocation of risk. In other words, it is a public bet on the domestic supply of the new factor of production: compute.
If there is a unifying argument across this body of work, it’s that AI-driven growth will be transformative but constrained. It will be accelerated in some sectors, delayed in others, and filtered through institutional complexity.