Economic Research

This document is based on testimony presented to the Joint Economic Committee (JEC).1 At the bottom of this document is a table of empirical AI research, which is also available at Datawrapper.de. Some of the big trends I’m seeing:

  • AI should be understood as a potential general-purpose technology: one capable of transforming tasks, jobs, firms, and markets, but whose diffusion will be slowed by the same frictions, transition costs, and organizational barriers that characterized earlier breakthroughs like electricity and tractors.

  • Empirical research across multiple domains consistently shows that generative AI boosts productivity especially for lower-skilled workers. In other words, AI is a skill equalizer. Yet the gains are uneven, with high-skill workers often benefiting less and, in some cases, even experiencing slower performance with AI tools. Adoption across U.S. businesses is rising but still concentrated in tech-heavy sectors, suggesting the economy remains in the early stages of the diffusion curve.

  • The labor market signals are mixed. AI-exposed occupations show employment declines among early-career workers even as older cohorts remain stable. One likely explanation is that this is a story of hiring frictions. AI-driven changes in job search make matches harder for inexperienced workers lacking strong signals.

In the final section, I argue that, as Congress contends with these changes, it should look first to existing legal systems. The real risk comes from poorly designed state laws creating a costly patchwork that stifles innovation. Congress should adopt narrow federal preemption to prevent a fragmented regulatory landscape and should consider pilot programs and sandboxes to allow experimentation with new AI technologies.


General Purpose Technologies and a Framework #

Economists typically make a distinction between specific technologies that fit narrow needs and general-purpose technologies (GPTs) that have a wide range of applications and can reshape broad swaths of the economy. Steam power, electricity, semiconductors, and the Internet are all canonically considered GPTs.2 But not all GPTs have the same economic impact. Electricity famously produced enormous productivity gains once factories reorganized around it – but that reorganization took decades. Broadband Internet, by contrast, has been much harder to tie to clear growth effects in the data.3 GPTs are powerful, but their actual economic payoff is inconsistent.

To understand these variations, most researchers root their analysis in the task-based approach. Consider truck driving. A truck driver doesn’t just drive a semi. He or she loads cargo, verifies paperwork, secures loads, performs safety checks, and communicates with dispatch. Each employer – FedEx, Amazon, J.B. Hunt – mixes these tasks differently depending on its business model. When a firm adopts a new technology, it is almost never replacing an entire job. It is amplifying, altering, or reorganizing specific tasks. That shift alters which workers the firm needs, how capital is deployed, and how the firm competes.

In other words, new technologies change the tasks that make up jobs. Tasks are bundled into productive combinations to make a job. Jobs combine with capital equipment to form firms. And firms compete within markets. To understand a new technology, especially a potential GPT like AI, we need to understand how AI changes tasks, and how these shifts reverberate throughout the ecosystem.

Technological change is never free. It requires buying new machines, installing new software, and reorganizing production lines. And with the explicit cost comes the implicit cost of the transition period, when the new process is not yet better than the old one. These switchover disruptions are the essence of opportunity cost. Firms adopt a technology only when expected gains outweigh these combined costs, which can be substantial.4

These frictions and costs help explain why GPTs can be slow to diffuse. Tractors, for example, were initially adopted in the Wheat Belt of North Dakota, South Dakota, and Kansas in the 1920s. But it took another 20 years for them to become common in the Corn Belt of Iowa, Illinois, and Nebraska.5 Daniel Gross (2017) of Harvard Business School explained:

The tractor first developed for narrow applications with existing complementary equipment, exogenously high demand, and lower R&D costs, and initial diffusion was accordingly rapid for these applications, but otherwise limited in scope. Only later did tractor technology become sufficiently general for its diffusion to be broad based and pervasive. This pattern of expanding scope is consistent with other historical examples and with economic theory, which suggests that in this context, R&D will naturally progress from specific- to general-purpose variants of an innovation, and that these technical advances will (i) drive the development of additional complementary technologies, and (ii) directly translate to an increasing scope of diffusion. Lags in diffusion can therefore be the result of holdups and market failures in R&D that stymie the generalization of existing technology.6

Electricity also took time to diffuse. Though the transition started in the 1890s, electric power didn’t overtake steam power until around 1920. It then took another 20 years for the transition to be complete.7 Paul David (1990) summarized a decade of economic history on the transition from water and steam power to electrical power – the dynamo.8 The rise of the electric factory, as David explained it, was “a long-delayed and far from automatic business. It did not acquire real momentum in the United States until after 1914-17” when regulatory changes allowed entrepreneurs to experiment with new methods of production.

Electrification took decades because factories had perfected steam-powered layouts they were reluctant to abandon. Andrew Atkeson and Patrick J. Kehoe explained that manufacturers were reluctant to close existing plants and lose embodied knowledge for what, initially, was only a marginally superior technology.9 Learning how to use new technologies efficiently took decades. New technologies or knowledge are physically embedded in specific forms – machines, factories, organizations. Realizing their benefits requires building or redesigning those material systems.

These historical lessons help explain why generative AI is attracting such attention from economists and policymakers. Like past general-purpose technologies, AI today is not a single product but a range of different applications, each flexible and increasingly capable of doing complex tasks. This generative capacity allows the technology to substitute for and complement cognitive labor in ways that earlier digital tools could not. Generative AI is not just faster software. It can produce ideas, draft solutions, and reshape workflows – provided that firms can overcome the same frictions, transition costs, and organizational barriers that slowed earlier general-purpose technologies.

History can also provide a rich set of diagnostic questions about which frictions slow adoption. Adapting the work of Atkeson and Kehoe, for example, policymakers should be asking:

  • How much organization-specific knowledge have companies built up with current (non-AI) technologies? Can AI be “bolted on” to existing processes, or does it require fundamental redesign like electricity did?
  • Is AI currently “only marginally superior” to existing approaches, or is the gap larger?
  • What institutional and regulatory barriers prevent companies from adopting AI even when it’s clearly superior?
  • Is AI more or less embodied than electricity was? Electricity required complete factory redesign; does AI require similar organizational restructuring?
  • Where is AI “embodied”? In trained models? In data pipelines? In organizational workflows?

If generative AI is to function as a true general-purpose technology, its impact must extend beyond impressive demos and headline-grabbing capabilities. It must alter the underlying mechanics of production, which is the focus of the empirical research reviewed in the next section.


Skill and Task Changes #

Empirical research has begun to clarify how generative AI is reshaping work and productivity. These studies offer a more grounded picture than the speculative debates a decade ago. One consistent finding is that generative AI can produce substantial productivity gains in a variety of settings – customer support, legal work, and software development, among others. But this is especially true for lower-skilled workers.

Brynjolfsson, Li, and Raymond (2023) provided one of the earliest causal estimates of how large language models reshape work inside real firms.10 This paper analyzed the staggered rollout of an AI-powered assistant to 5,179 customer support agents and found that access to the tool substantially increased worker productivity. Measured by issues resolved per hour, the average increase was 14 percent. The effects, however, were far from uniform. Novice and lower-skilled agents saw dramatic gains, with productivity rising roughly 34 percent, while experienced, higher-skilled agents saw little change.

The study also found that the availability of the AI assistant reduced employee attrition. Workers who used the tool were less likely to quit – a pattern consistent with reduced stress, fewer frustrating interactions, or a stronger sense of mastery over daily tasks. Not only did generative AI make agents more productive, it also appeared to make the job itself more manageable and rewarding.

This paper matters because it demonstrates not just efficiency gains, but distributional effects. AI can boost the output of lagging workers and can meaningfully improve workplace dynamics. But this pattern is not just confined to customer support. In software development, legal reasoning, professional writing, and creative work, generative AI is consistently altering which tasks workers perform and how skill advantages translate into output.

Hoffmann et al. (2024) exploited a natural experiment created by GitHub’s rollout of Copilot to understand how generative AI reshapes the internal organization of work.11 When developers gained access to Copilot, their task portfolio shifted. They spent more time writing and modifying code and less time on peripheral project-management tasks. AI effectively pushed workers toward the core of their job.

Two mechanisms seem to explain the shift. First, Copilot increased autonomous work by handling some of the routine. Second, it encouraged exploration by lowering the cost of trying new approaches. Notably, these effects were strongest among lower-ability developers, who saw the largest gains in autonomy and exploratory coding. The result was a workforce whose internal task composition leaned more heavily toward the productive frontier.

In the legal domain, AI access seems to speed up the process. Choi et al. (2023) conducted a randomized controlled trial with law students using GPT-4 to complete legal analysis tasks.12 While AI support produced only modest and inconsistent improvements in the quality of legal reasoning, it produced large and uniform gains in speed. Students completed tasks faster regardless of prior ability, but the largest improvements occurred among the least-skilled participants. Participants also reported enjoying the tool, suggesting human-AI complementarity might improve job satisfaction.

Generative AI also seems to boost productivity in writing. Noy and Zhang (2023) ran an online experiment with 453 college-educated workers who completed writing tasks tied to their occupations.13 Participants who had access to ChatGPT produced noticeably better work – average quality increased by 18 percent – and finished their assignments much faster, cutting task time by about 40 percent. Similarly, Doshi and Hauser (2024) found that AI can boost writing creativity.14 In a study of nearly 300 people writing short fictional stories, participants who received ideas generated by GPT-4 produced stories that independent evaluators judged as more creative, better written, and more enjoyable. The strongest gains appeared for participants who were less naturally creative. However, participants anchored their work on the suggestions provided by the model, which raised average quality but narrowed the range of creative outcomes.

There are limits, as Becker, Rush, Barnes, and Rein (2025) show.15 Their study is more limited – a randomized controlled trial with just 16 experienced open-source developers completing 246 tasks – but it added a critical finding. The researchers asked the developers and outside experts to predict how much the AI tool would speed up the work. Developers estimated that the AI tool reduced completion time by 20 percent; experts in economics and machine learning predicted 38 and 39 percent reductions, respectively. However, AI actually increased task times by 19 percent. This work makes clear that AI has real limits, especially at the upper end of the skill distribution.

The domain of education remains a black box. It is known that students and teachers are using generative AI at rapidly rising rates, but rigorous causal studies remain rare. Roldán-Monés (2024) is an exception. In this randomized controlled trial, students prepping for a university debating competition were randomly given access to generative AI to help them prepare. In a result that bucked the trend, high-ability students actually benefited more from generative AI than lower-ability students.16

One area that seems especially promising involves neurodivergent workers. A recent study from the United Kingdom’s Department for Business and Trade found that neurodiverse employees were significantly more enthusiastic about AI assistants than their neurotypical peers.17 They reported higher satisfaction with the tools and were about one quarter more likely to recommend them to others. Interviews with people with ADHD, dyslexia, and autism reveal similar patterns.18 Many report that AI systems help them break down and organize complex tasks, improve the clarity of their written communication, and manage cognitive load in ways that traditional workplace tools do not. Generative AI could meaningfully expand the range of workers who can thrive in certain roles, raising important questions about inclusion and accessibility.

Still, there is very little evidence on the best way to train people to use these tools effectively. Most firms are experimenting in real time, often relying on informal peer learning rather than structured training programs, and the education sector has only begun to test targeted curricula. Developing rigorous research on AI training and skill acquisition will be essential for understanding how quickly generative AI can diffuse through the workforce and which workers will benefit the most.

The emerging evidence points to several broad lessons. The technology consistently delivers its largest gains for workers with weaker initial skills, both by accelerating routine tasks and by improving the quality of their output. It also appears to reshape task portfolios, pushing many workers toward more productive activities and away from administrative or peripheral work. At the same time, the benefits are uneven. High performers who rely on deeply ingrained heuristics or finely tuned workflows often see smaller improvements, and in some cases AI tools can even slow them down.

The central lesson is that generative AI is a skill equalizer. It strengthens some workers, challenges others, and requires a deeper understanding of how people acquire and deploy skills in environments where machines can produce ideas and perform reasoning steps alongside them.


How Companies Are Reacting #

The U.S. Census Bureau’s Business Trends and Outlook Survey (BTOS) provides the most comprehensive real-time picture of AI adoption across the American economy. Released biweekly, the BTOS captures responses from approximately 1.2 million businesses through a rotating panel design. The survey’s AI supplement measures the use of machine learning, natural language processing, virtual agents, and voice recognition technologies, and is available by sector, state, and the 25 most populous metropolitan areas.

Since September 2023, the share of U.S. businesses using artificial intelligence in producing goods or services has more than doubled, rising from 3.7 percent to 9.9 percent as of September 2025. While this represents consistent growth, the vast majority of businesses – some 83 percent – still report no AI usage in their operations. Meanwhile, about 7 percent of businesses indicate they do not know whether they use AI technologies.

AI adoption varies dramatically across U.S. industry sectors. The information sector leads adoption with 29.9 percent of businesses reporting AI use as of September 2025, up from 13.9 percent in September 2023. This sector includes publishing, software, telecommunications, and data processing. Professional, scientific, and technical services follows at 19.8 percent, and finance and insurance at 17.8 percent.

In contrast, most sectors remain in single digits. Traditional industries show particularly low adoption: construction (3.0 percent), retail trade (4.2 percent), and accommodation and food services (2.9 percent). The disparity suggests AI adoption is concentrated in knowledge-intensive and technology-oriented industries, while labor-intensive and consumer-facing sectors have been slower to integrate these technologies.

While aggregate adoption remains uneven, new experimental evidence shows what happens inside firms once generative AI is actually deployed. A recent field experiment at the National Bank of Slovakia randomly assigned access to generative AI to workers to test the tech’s impact.19 The researchers mapped all work onto a standard task-based framework to understand which tasks were altered. The tool proved especially complementary to nonroutine work, where judgment, synthesis, and idea generation are central to performance.

Routine work showed a more complicated pattern. Employees in routine roles experienced some of the largest individual productivity gains, but generative AI was less effective when the task itself required high levels of structure and repetition. This produced a notable mismatch between the workers who benefit the most and the tasks for which the technology is best suited. A simulation exercise suggests that if the organization reallocated workers across tasks to better match these strengths, total output could rise by more than 7 percent.
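The intuition behind that reallocation exercise can be sketched with a toy assignment model. The productivity numbers below are made up for illustration (they are not the study's data): given each worker's output on each task once AI is deployed, compare the status-quo assignment against the best possible reassignment.

```python
from itertools import permutations

# Hypothetical per-worker output on each task once AI is deployed.
# Rows: workers; columns: tasks. Numbers are illustrative only.
output = [
    [10, 14, 9],   # routine-role worker: big AI gains on task 1
    [12, 11, 13],  # mid-skill worker
    [15, 10, 12],  # high-skill worker
]

def total_output(assignment):
    """Sum output when worker i performs task assignment[i]."""
    return sum(output[i][t] for i, t in enumerate(assignment))

status_quo = (0, 1, 2)  # each worker keeps their current task
best = max(permutations(range(3)), key=total_output)

gain = total_output(best) / total_output(status_quo) - 1
print(f"Reallocation gain: {gain:.1%}")
```

With these invented numbers the gain is far larger than the paper's 7 percent; the point is only that the same workers and the same tasks can produce more output under a different matching.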

The experiment also uncovered differences across skill levels. Lower-skill workers saw the biggest improvements in quality, while higher-skill workers mainly benefited through time savings and greater efficiency. Together, these findings provide some of the strongest empirical evidence to date that generative AI reshapes productivity through task-level complementarities rather than simple automation, with important implications for how firms organize labor and how AI may diffuse through labor markets more broadly.

Jagged Intelligence, Jagged Adoption #

At Davos this week, Demis Hassabis, the CEO of Google DeepMind, gave a standout talk about the path he believes AI will take in the coming years. While he thinks the tool will be utterly transformative, Hassabis still warned that the models have “these jagged edges that they’re good at and not good at.” Hassabis’s comments were a nod to AI pioneer Andrej Karpathy, who coined the term “jagged intelligence” to describe the uneven outputs from advanced models. In some domains, they are capable of superhuman performance while remaining incompetent in others.

Likewise, the adoption of AI models will be jagged. Real-world workflows are constrained by bottlenecks, where a single weak link can spoil the value of the entire process. Automating one task in isolation often does little, and can even backfire, if the remaining human tasks become more important, more time-consuming, or harder to automate. As a result, AI adoption is unlikely to proceed in a smooth fashion. Instead, firms may stall for long periods then jump in adoption once enough complementary pieces fall into place to clear the bottleneck.
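This weak-link dynamic mirrors Amdahl's law from computing: accelerating one task leaves the whole workflow bounded by the tasks that remain manual. A minimal sketch, with hypothetical time shares and speedups:

```python
def workflow_speedup(task_shares, speedups):
    """Overall speedup when each task's time share is accelerated by its
    own factor (Amdahl's law applied to a multi-task workflow)."""
    new_time = sum(s / f for s, f in zip(task_shares, speedups))
    return 1 / new_time

# Hypothetical job: drafting is 30% of time and AI makes it 5x faster,
# but review (40%), coordination (20%), and approval (10%) are untouched.
shares = [0.30, 0.40, 0.20, 0.10]
print(workflow_speedup(shares, [5, 1, 1, 1]))  # only ~1.32x overall
# Clearing the review bottleneck too unlocks a much larger jump:
print(workflow_speedup(shares, [5, 3, 1, 1]))  # ~2.03x
```

The discontinuity is the point: the second complementary improvement delivers more than the first, which is why adoption can stall and then jump.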

One of the best surveys on these bottlenecks comes from the Census Bureau’s Management and Organizational Practices Survey (MOPS), which Kristina McElheran analyzed for her talk at the Allied Social Sciences Association (ASSA) earlier this month. The two biggest constraints faced by businesses were the cost of AI adoption and finding business cases.

Still, these data come from 2021, before ChatGPT went public. To better understand what’s currently happening, I collected a number of business surveys from the past year, which asked professionals about AI bottlenecks, use, and the productivity gains they’re experiencing from the tech.

These surveys should be taken with a grain of salt. They rely on self-reported data from executives who may have incentives to overstate adoption or returns. The samples skew toward large enterprises and many are produced by consulting firms that sell AI implementation services. On top of this, AI use is defined inconsistently across the board, making direct comparisons difficult.

That said, the gap between adoption and profitability is hard to dismiss.

When looking across these surveys, a picture emerges. AI adoption is advancing rapidly in experimentation yet remains stubbornly uneven in deployment and value creation. Most firms now use AI in at least one business function, yet the majority are still stuck in pilots, with only a minority achieving enterprise-wide scale or measurable bottom-line impact. Returns take years to materialize and are especially elusive for more ambitious systems like agentic AI.

The profit gap is especially stark:

  • MIT NANDA (2025): 95 percent have had no measurable P&L impact; only 5 percent are extracting significant value
  • McKinsey (2025): more than 80 percent report no tangible impact on enterprise-level EBIT
  • BCG (2024): Just 4 percent have developed capabilities that consistently generate significant value; another 22 percent are “beginning to realize substantial gains”

Many of the bottlenecks aren’t technical. MIT NANDA concludes that the divide between the 5 percent extracting value and the 95 percent that aren’t “does not seem to be driven by model quality or regulation, but seems to be determined by approach.” Fragmented and low-quality data, siloed IT architectures, and persistent difficulty measuring ROI are cited alongside broader organizational change as reasons for slow adoption. Still, inaccurate AI outputs have remained a key barrier on the technical side.

Human and organizational factors loom just as large. Workforce reskilling, change management, and redesigning workflows around AI are repeatedly identified as decisive for success. Meanwhile, leaders’ ambitions appear to outstrip their readiness: some 63 percent of executives call AI implementation a high priority, but 91 percent don’t feel prepared to execute. There is also a startling attrition rate. S&P Global finds that 42 percent of companies have abandoned some AI initiatives before reaching production.

In short, AI adoption is jagged not because progress in models is uneven, but because production systems are. Until firms address the organizational, data, and governance constraints that bind their workflows, AI’s transformative potential will continue to arrive in fits and starts rather than as a smooth diffusion curve.


The Labor Market #

In late summer 2025, a chart from the Federal Reserve Bank of New York went viral for showing that recent computer engineering graduates had the third-worst job prospects of any major.20 It followed alongside a spate of reports about the dismal job market for recent college graduates. The episode quickly became a focal point for worries that AI might already be reshaping early-career opportunities in unexpected ways.

As Connor O’Brien noted in a careful review of the underlying data, the chart’s alarming implications stem largely from very small subsamples within the American Community Survey (ACS). As he explained:

While the ACS is itself a large survey, the sub-samples highlighted in the chart – young college graduates with very specific majors – are quite small. As a result, the confidence intervals around these estimates are huge.

Take the estimated 7.5 percent unemployment rate for computer engineering majors, for example. Using special weights provided by the ACS, we can calculate the confidence interval on this estimate. We find that we can say with 95 percent confidence that the unemployment rate for new computer engineering graduates is somewhere between four percent and 11 percent. For perspective, the most we can feel comfortable saying is that the job market for those graduates looks like something between the absolute height of the Great Recession and the booming job market of 2019, just before the pandemic. That’s it.21
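O’Brien’s back-of-envelope check can be reproduced with a standard normal-approximation (Wald) interval for a proportion. The sample size below is a hypothetical placeholder, not the actual ACS subsample, and the ACS properly computes variance with replicate weights rather than this textbook formula:

```python
import math

def wald_ci(p_hat, n, z=1.96):
    """95% normal-approximation confidence interval for a proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

# Hypothetical: a 7.5% unemployment estimate from a subsample of ~250 graduates.
lo, hi = wald_ci(0.075, 250)
print(f"95% CI: {lo:.1%} to {hi:.1%}")
```

With a few hundred respondents, the interval spans several percentage points in each direction, which is the heart of O’Brien’s objection.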

Two weeks after that came out, economists Erik Brynjolfsson, Bharat Chandar, and Ruyu Chen published what is arguably the strongest empirical study to date on how AI is affecting early-career workers: “Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence.”22 Their analysis brings much-needed clarity to a debate that had been dominated by anecdotes and viral charts. The authors use detailed labor market data to isolate how employment patterns have shifted in occupations, summarized in a series of six facts:

  • There have been substantial declines in employment for early-career workers (ages 22-25) in occupations most exposed to AI, such as software developers and customer service representatives.
  • Workers aged 22 to 25 have experienced a 6 percent decline in employment from late 2022 to July 2025 in the most AI-exposed occupations, compared to a 6-9 percent increase for older workers.
  • Not all uses of AI are associated with declines in employment. Entry-level employment has declined in applications of AI that automate work, but not those that most augment it.
  • Employment declines for young, AI-exposed workers remain after conditioning on firm-time effects like interest rate changes.
  • The labor market adjustments are visible in employment more than compensation.
  • The above facts are largely consistent across various alternative sample constructions.

Given the available evidence, it appears to be entry-level workers who are affected by AI even as the rest of the job market continues to chug along. However, it remains an open question why only entry-level workers are affected. One possibility is that large language models have shifted the fundamental mechanics of how workers and employers find each other. Before generative AI, employers could rely on automated screening tools, while workers had to manually tailor applications for each job, repeatedly entering the same data.

ChatGPT changed this equilibrium by allowing job seekers to apply at scale while also enabling employers to post more low-commitment openings. The result is a noisier labor market. Because AI makes it harder for both applicants and employers to signal genuine intent, there are fewer matches for those with non-differentiated skills. On the other hand, skilled workers don’t face this dilemma because they have portfolios, referrals, and networks. This shift could help explain why early-career workers may be struggling in AI-exposed fields even as everyone else is doing fine. They often lack the credible signals that experienced workers possess.

Economist Jack Meyer recently mapped out the Beveridge curve for two different cohorts – college graduates from 2010 to 2020, and graduates from 2021 to 2025 – to show how generative AI may be degrading the efficiency of the hiring process itself.23 The Beveridge curve plots the relationship between job vacancies and unemployment, and when it shifts outward, it signals that the labor market is becoming worse at turning openings into hires. Meyer argues that this is exactly what we are now seeing for recent graduates.
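An outward Beveridge-curve shift is often interpreted through a Cobb-Douglas matching function, m = A·u^α·v^(1−α), where a fall in the efficiency parameter A means the same vacancies and unemployment yield fewer hires. A sketch with hypothetical numbers (not Meyer’s data):

```python
def matching_efficiency(hires, u, v, alpha=0.5):
    """Back out efficiency A from a Cobb-Douglas matching function
    m = A * u**alpha * v**(1 - alpha)."""
    return hires / (u ** alpha * v ** (1 - alpha))

# Hypothetical rates (fractions of the labor force). Same vacancies and
# unemployment, fewer hires => lower measured efficiency, i.e., an
# outward shift of the Beveridge curve.
a_pre = matching_efficiency(hires=0.040, u=0.05, v=0.05)
a_post = matching_efficiency(hires=0.030, u=0.05, v=0.05)
print(a_pre, a_post)
```

In this toy example, efficiency falls by a quarter even though neither openings nor job seekers changed, which is the signature of a noisier matching process rather than outright job destruction.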

These findings point to a labor market that is not collapsing under the weight of automation, but one that is becoming noisier, slower, and less efficient at matching young workers to opportunities. But the implications extend well beyond hiring. The forces reshaping entry-level jobs are also bringing about calls for regulatory action.



Notes


  1. Korinek, A. (2025). The Economics of Transformative AI. NBER. https://www.nber.org/reporter/2024number4/economics-transformative-ai ↩︎

  2. Bresnahan, T. F., & Trajtenberg, M. (1995). General purpose technologies ’engines of growth’? Journal of Econometrics, 65(1), 83-108. https://doi.org/10.1016/0304-4076(94)01598-t ↩︎

  3. Stanley, T. D., Doucouliagos, H., & Steel, P. (2018). Does ICT generate economic growth? A meta-regression analysis. Journal of Economic Surveys, 32(3), 705-726. https://doi.org/10.1111/joes.12211 ↩︎

  4. Holmes, T. J., Levine, D. K., & Schmitz, J. A. (2012). Monopoly and the incentive to innovate when adoption involves switchover disruptions. American Economic Journal: Microeconomics, 4(3), 1-33. https://doi.org/10.1257/mic.4.3.1 ↩︎

  5. Gross, D. (2017). Scale versus Scope in the Diffusion of New Technology: Evidence from the Farm Tractor. https://doi.org/10.3386/w24125 ↩︎

  6. Ibid. ↩︎

  7. David, P. A. (1990). The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox. The American Economic Review, 80(2), 355-361. http://www.jstor.org/stable/2006600 ↩︎

  8. Ibid. ↩︎

  9. Atkeson, A., & Kehoe, P. (2007). Modeling the Transition to a New Economy: Lessons from Two Technological Revolutions. The American Economic Review. https://doi.org/10.3386/w8676 ↩︎

  10. Brynjolfsson, E., Li, D., & Raymond, L. R. (2023). Generative AI at work (NBER Working Paper No. 31161). National Bureau of Economic Research. https://doi.org/10.3386/w31161 ↩︎

  11. Hoffmann, M., Boysel, S., Nagle, F., Peng, S., & Xu, K. (2025). Generative AI and the nature of work (Harvard Business School Strategy Unit Working Paper No. 25-021). https://doi.org/10.2139/ssrn.5007084 ↩︎

  12. Choi, J. H., Monahan, A., & Schwarcz, D. (2023). Lawyering in the age of artificial intelligence (Minnesota Legal Studies Research Paper No. 23-31). Minnesota Law Review, 109. https://doi.org/10.2139/ssrn.4626276 ↩︎

  13. Noy, S., & Zhang, W. (2023). Experimental evidence on the productivity effects of generative artificial intelligence. Science, 381(6654), 187-192. https://doi.org/10.1126/science.adh2586 ↩︎

  14. Doshi, A. R., & Hauser, O. P. (2024). Generative AI enhances individual creativity but reduces the collective diversity of novel content. Science Advances, 10, eadn5290. https://doi.org/10.1126/sciadv.adn5290 ↩︎

  15. Becker, J., Rush, N., Barnes, E., & Rein, D. (2025). Measuring the impact of early-2025 AI on experienced open-source developer productivity. arXiv. https://arxiv.org/abs/2507.09089 ↩︎

  16. Roldán-Monés, T. (2024). When GenAI increases inequality: Evidence from a university debating competition (Working paper). EsadeEcPol. https://www.esade.edu/ecpol/wp-content/uploads/2019/09/2409-ChatGPTRoldan_ecpol.pdf ↩︎

  17. Department for Business and Trade. (2025). The evaluation of the M365 Copilot pilot in the Department for Business and Trade. https://assets.publishing.service.gov.uk/media/68adbe409e1cebdd2c96a19d/dbt-microsoft-365-copilot-evaluation.pdf ↩︎

  18. Curry, R. (2025, November 8). People with ADHD, autism, dyslexia say AI agents are helping them succeed at work. CNBC. https://www.cnbc.com/2025/11/08/adhd-autism-dyslexia-jobs-careers-ai-agents-success.html ↩︎

  19. Marsal, A., & Perkowski, P. (2025). A task-based approach to generative AI: Evidence from a field experiment in central banking. SSRN. https://doi.org/10.2139/ssrn.5228176 ↩︎

  20. The labor market for recent college graduates. Federal Reserve Bank of New York. (2025). https://www.newyorkfed.org/research/college-labor-market ↩︎

  21. O’Brien, C. (2025, August 13). A viral chart on recent graduate unemployment is misleading. https://agglomerations.substack.com/p/a-viral-chart-on-recent-graduate ↩︎

  22. Brynjolfsson, E., Chandar, B., & Chen, R. (2025). Canaries in the coal mine? Six facts about the recent employment effects of artificial intelligence. Stanford Digital Economy Lab. https://digitaleconomy.stanford.edu/wp-content/uploads/2025/08/Canaries_BrynjolfssonChandarChen.pdf ↩︎

  23. Meyer, J. (2025, November 9). AI might not be taking your job anytime soon. https://jackbmeyer.substack.com/p/ai-might-not-be-taking-your-job-anytime ↩︎