## Summary of Recent Polling Data (2024–2025)
To provide a concise overview, the table below summarizes key findings from major public opinion polls on AI in the United States since 2024. It includes the poll source and date, the population sampled, and highlights of the results (including any notable demographic breakdowns):
Poll (Date) | Source & Sample | Key Findings on AI Attitudes |
---|---|---|
Pew Research (Aug 2024; reported Apr 2025) pewresearch.org | Pew Research Center – 5,410 U.S. adults (national) | 51% “more concerned than excited” about AI’s growing use, vs. 11% “more excited” (38% mixed feelings) pewresearch.org. Only about one in ten think AI will have a positive impact on key areas like jobs or the economy pewresearch.org; in fact, just 10% say AI will improve elections or news media pewresearch.org. 55% want more control over AI in their lives pewresearch.org, and about 60% worry the government won’t regulate AI enough pewresearch.org. 62% have little or no confidence in the federal government to regulate AI effectively, and 59% lack confidence in companies to use AI responsibly pewresearch.org. Women are more wary than men (e.g., only 12% of women vs. 22% of men expect AI to benefit the U.S. in the long run) pewresearch.org. Younger adults and those with college degrees report greater familiarity with AI than older, less-educated adults pewresearch.org. |
Gallup/Bentley (Apr–May 2024; reported Aug 2024) news.gallup.com | Gallup Panel (with Bentley Univ.) – 5,835 U.S. adults | Americans see more downsides than upsides to AI in 2024, though pessimism eased slightly from 2023. 31% say AI does more harm than good, only 13% say more good than harm, while the majority (56%) call it a “net neutral” mix news.gallup.com. (In 2023, 40% saw mostly harm, so extreme worry fell by 9 points news.gallup.com.) Nearly 75% believe AI will reduce the number of U.S. jobs in the next decade news.gallup.com (essentially unchanged from 2023). Similarly, 77% do not trust businesses much or at all to use AI responsibly news.gallup.com. Specific uses: 85% are concerned about AI in hiring decisions, 83% about AI driving vehicles, and 80% about AI recommending medical treatments news.gallup.com. Even AI’s least concerning use (helping students study) still worries about 66% of Americans news.gallup.com. 64% feel at least “somewhat knowledgeable” about AI (9% extremely, 55% somewhat) news.gallup.com, but knowledge drops sharply for ages 60+ and is higher among men (72% at least somewhat) than women (57%) news.gallup.com. Those highly knowledgeable about AI are less likely to fear it – yet even among the most knowledgeable, more believe AI is harmful than beneficial (31% vs. 22%) news.gallup.com. |
Quinnipiac (April 3–7, 2025) poll.qu.edu | Quinnipiac Univ. Poll – 1,562 U.S. adults (nationwide) | Americans have mixed feelings: 44% say AI will do more harm than good in their daily lives, while 38% expect more good than harm poll.qu.edu. Views diverge by income – 60% of $200k+ earners foresee AI doing more good, vs. 59% of <$50k earners who foresee more harm poll.qu.edu. By sector, opinions vary: 54% believe AI will harm the education system (only 32% say help) poll.qu.edu, but 59% believe AI will help medical advances (24% say harm) poll.qu.edu. There is broad concern for children: 83% are concerned AI will impede the youngest generation’s ability to think independently (54% very concerned) poll.qu.edu. Majorities say businesses (73%) are not transparent enough about AI use poll.qu.edu and government (69%) isn’t doing enough to regulate AI poll.qu.edu. On jobs, 56% expect AI to decrease jobs nationwide poll.qu.edu, but only 21% of employed people worry their own job will be replaced by AI poll.qu.edu (a “workforce paradox”). In fact, 39% of workers are actively learning new AI-related skills (especially those with college degrees) poll.qu.edu. Usage: 41% use AI tools like ChatGPT at least sometimes (16% very often) poll.qu.edu – usage is higher among younger adults and those with higher education (mirroring the “digital divide” in AI adoption) poll.qu.edu. Americans draw clear boundaries on AI applications: only 23–30% are comfortable with AI handling tasks like screening insurance claims, loan applications, or job applications (around two-thirds oppose these) poll.qu.edu, but a slight majority (53%) supports AI assisting police with facial recognition (42% oppose) poll.qu.edu. 86% are concerned about politicians using AI to spread misinformation poll.qu.edu, reflecting fears of AI in politics. |
YouGov (March 14–18, 2024) today.yougov.com | YouGov Poll – 1,073 U.S. adult citizens (online) | The public mood is cautious: 54% feel “cautious” about AI, 49% “concerned,” 40% “skeptical.” Only 13% say they feel “excited” and 7% “trusting” – indicating caution is the top sentiment today.yougov.com. 44% think it’s likely AI will become smarter than humans eventually, and 39% are at least somewhat concerned AI could cause the end of humanity (though 44% are not concerned about that extreme) today.yougov.com. Trust is low: 55% don’t trust AI to be unbiased, 62% don’t trust it to be ethical, and 45% don’t trust its accuracy today.yougov.com. When asked about impact on society, 42% expect AI’s societal effect to be negative vs. 27% positive (19% neutral) today.yougov.com. However, people are less pessimistic about personal impact: only 27% think AI will negatively affect them personally (25% expect a positive personal impact, 32% say neutral) today.yougov.com. 33% foresee a negative impact on the U.S. economy, while 27% foresee a positive economic impact today.yougov.com. Awareness: 88% know at least a little about AI (10% claim a “great deal” of knowledge) today.yougov.com. Usage: 44% have used AI tools with any frequency (but only about 11% use them “often”) today.yougov.com. Top tools used in the past month: AI text generation (23%), chatbots (22%), voice assistants (15%), and facial recognition, e.g. phone unlock (15%) today.yougov.com. 31% say AI is making their life easier (versus 13% who say harder) today.yougov.com. Demographics: Adults under 45 are far more positive on AI – e.g. 36% of under-45s vs. 19% of 45+ think AI will improve society today.yougov.com. Younger adults also trust AI more (49% of under-30s trust AI’s decisions vs. <20% of seniors) and use it more (45% of under-30s use AI at least monthly) today.yougov.com. Men are slightly more optimistic about AI than women, but both report high levels of caution. Partisan differences are mild; for instance, Republicans are somewhat more likely than Democrats to say AI’s impact will be neutral or positive, but concern is prevalent across the spectrum. |
Sources: The above table draws from Pew Research Center pewresearch.org, Gallup news.gallup.com, Quinnipiac University Poll poll.qu.edu, YouGov today.yougov.com, and other referenced surveys.
## Overall Attitudes: Cautious Optimism vs. Concern
Americans approach artificial intelligence with a mix of curiosity and caution. Surveys in late 2023 and 2024 show that “concern” far outweighs excitement about AI’s growing role in daily life. In a Pew Research Center poll, 52% of U.S. adults said they are more concerned than excited about AI in daily life, versus only 10% who are more excited (the remainder feel a mix of both) pewresearch.org. This marked an increase in worry from prior years, as illustrated in the chart below pewresearch.org. Consistently, the dominant feeling Americans report toward AI is caution – in a March 2024 YouGov survey, 54% described their feelings as “cautious,” and nearly half (49%) said “concerned,” compared to 29% “curious” and 22% “scared” today.yougov.com. Only a minority express positive excitement or optimism untempered by worry. Globally, the United States ranks among the most skeptical nations about AI: an Ipsos poll found only 37% of Americans agree that AI’s products and services have more benefits than drawbacks, one of the lowest rates among 31 countries surveyed hai-production.s3.amazonaws.com.
Such caution stems from multiple worries. Many Americans fear AI could pose serious risks if unchecked – 39% are at least somewhat concerned that AI could “end the human race” (including 15% very concerned about this extreme scenario) today.yougov.com. A plurality (44%) even believes it’s likely AI will eventually become more intelligent than humans (22% say very likely), and 14% think AI is already smarter than people today.yougov.com. Even short of apocalyptic fears, people broadly sense that AI could do harm: in a January 2024 Pew survey, 51% of U.S. adults said they’re more concerned than excited about AI’s growing use (versus just 11% more excited) pewresearch.org. Majorities also worry about AI’s immediate downsides like bias, errors, or misuse, leading to low trust in the technology (discussed more below). Still, it’s not all gloom – many Americans acknowledge AI’s potential upsides in certain areas and younger generations especially exhibit more optimism. Overall, the public’s stance can be summarized as “cautious optimism” tilted heavily toward caution, with people demanding assurance that AI’s benefits will outweigh its risks.
## Trust and Regulation: Low Confidence in Unchecked AI
Public trust in AI systems is relatively low. Most Americans are not ready to hand important decisions over to algorithms without oversight. In the YouGov poll, 55% said they do not trust AI to make unbiased decisions (32% have no trust at all, 23% not too much trust) today.yougov.com. An even larger share, 62%, don’t trust AI to make ethical decisions today.yougov.com. And 45% lack trust in AI even to provide accurate information today.yougov.com – a striking figure given that one common use of AI (like chatbots) is answering questions. These findings highlight fears that AI may perpetuate biases or errors. Confidence is higher among younger adults: nearly half of adults under 30 expressed at least a fair amount of trust in AI’s decision-making abilities (e.g. 49% of under-30s trust AI to make unbiased decisions, vs. far fewer older adults) today.yougov.com. But overall, skepticism prevails in the wider population.
Because of such concerns, Americans broadly support stronger oversight of AI. In Pew’s 2024 survey, more than half of U.S. adults (55%) said they want more control over how AI is used in their lives pewresearch.org. Likewise, both the public and AI experts worry that government regulation of AI will be too lax rather than too strict – about six-in-ten Americans are more concerned government will not go far enough in regulating AI (versus only a minority who worry it will go too far) pewresearch.org. This translates into a belief that institutions currently aren’t doing enough. An April 2025 Quinnipiac University poll found 69% of Americans think the U.S. government is not doing enough to regulate AI, compared to only 15% who feel the government is doing enough poll.qu.edu. Similarly, 73% say tech businesses are not being transparent enough about their use of AI poll.qu.edu. The public effectively demands greater transparency and accountability around AI.
At the same time, there is a “trust gap” in who should manage AI: people do not place high confidence in either government or industry alone. Pew found 62% of Americans have little or no confidence in the federal government to regulate AI effectively pewresearch.org, and 59% have low confidence in companies to develop AI responsibly pewresearch.org. Gallup reported a similar result: 77% of U.S. adults said they do not trust businesses much or at all to use AI responsibly news.gallup.com. This paradox – strong support for AI regulation, but low trust in those who would implement it – suggests the public sees the need for oversight but is wary of both Big Tech and government competence. Indeed, large-scale analyses of public opinion conclude that while there is broad support for AI governance, people doubt that either tech companies or government alone will get it right brookings.edu. This may explain why many favor collaborative or transparent approaches (for example, 57% of Americans say companies should be more transparent about their AI use as the top way to reduce public concern news.gallup.com). In short, Americans want AI kept on a tight leash and prefer checks and balances – they’re looking for leadership on AI that they can trust, and they haven’t yet found it.
## AI Awareness and Usage
Public awareness of AI is high and rising, and a significant minority of Americans are now using AI tools in everyday life. Pew reported that 90% of Americans had heard at least a little about artificial intelligence as of mid-2023 pewresearch.org, and the share who feel they know a “great deal” about AI was about 12% in early 2025 (Quinnipiac poll) poll.qu.edu. Self-rated knowledge varies greatly by education and generation: for example, only 5% of Baby Boomers say they know a great deal about AI, versus 17% of Gen Z adults poll.qu.edu. Young adults are much more likely to be familiar with AI – in one survey, 20% of under-30s said they know a great deal about AI, compared to just 7% of those 30 and older today.yougov.com. Men also tend to report higher AI familiarity than women pewresearch.org. This “digital literacy” gap means younger, more educated groups may be engaging with AI more confidently, while others feel left behind or unsure.
In terms of adoption, multiple polls show around 4 in 10 Americans have experimented with AI tools (chatbots, image generators, voice assistants, etc.). Quinnipiac found 41% of adults use AI tools like ChatGPT, Google Bard, or Microsoft Copilot at least sometimes (16% “very often” and 25% “sometimes”), while 59% rarely or never do poll.qu.edu. Similarly, YouGov found 44% of Americans say they use AI tools with some frequency (though many only “rarely”), and an additional 20% weren’t sure if they use AI or not today.yougov.com. This uncertainty reflects that some people use AI-powered features (e.g. voice assistants or recommendation algorithms) without realizing it. Indeed, when asked about specific technologies, the most commonly used AI applications are fairly mundane: search/chat functions and productivity aids. According to Ipsos and YouGov data, the top use cases among U.S. AI users include: searching for information (by far #1), text generation (used by ~23% in the past month), conversational chatbots (22%), and virtual assistants like Siri/Alexa (15%) today.yougov.com ipsos.com. Creative uses are less common – about 10–16% have tried AI image generation recently today.yougov.com poll.qu.edu – and only a small fraction (4%) have interacted with physical robots today.yougov.com.
Notably, frequent or intensive AI use is still rare. Only about 1 in 10 Americans use AI tools “often,” per Ipsos tracking ipsos.com. Regular usage is concentrated in certain groups: younger adults again stand out, with 45% of under-30 Americans using AI at least once a month (versus far lower rates among seniors) today.yougov.com. People with higher education and tech-related jobs also report more frequent use poll.qu.edu. According to Quinnipiac, employed Americans with a 4-year college degree are roughly twice as likely as those without a degree to be learning and using AI in their work (55% of college grads are learning new AI skills vs. 27% of non-grads) poll.qu.edu. In the workforce, about 32% use AI tools at least “sometimes” for their job, while about 47% never do poll.qu.edu. This points to an emerging digital divide in AI adoption: those with more resources and tech savvy are embracing AI tools, whereas others have yet to integrate them.
Despite broad wariness, many Americans acknowledge personal benefits from AI in day-to-day tasks. In YouGov’s study, 31% said AI is making their life easier, more than double the 13% who said AI makes life harder (most others said it’s not making a noticeable difference) today.yougov.com. Convenience enhancements – like quicker information access, automation of simple tasks, or help with writing and research – are being felt by a significant minority of users. Younger adults especially see the upside: 46% of Americans under 45 say AI is making their lives easier, compared to just 18% of those 45 and older today.yougov.com. This generational gap underscores that as people grow up with AI-powered tools, they may become more comfortable with and reliant on them. Still, a plurality (36%) say AI hasn’t made their life distinctly easier or harder so far today.yougov.com, suggesting that for many Americans, AI’s impact on personal life remains limited or ambiguous. As AI systems become more capable and visible (from customer service chatbots to auto-complete in email), awareness and usage will likely continue to rise, and public attitudes may evolve beyond today’s tentative trial stage.
## Perceived Impact on Society, Jobs, and the Future
When asked about AI’s broader impact on society and the economy, Americans express guarded expectations – leaning pessimistic about the near term. In the YouGov survey, a plurality (42%) believe AI’s overall effect on society will be negative in the foreseeable future, while only 27% think it will be positive (about 19% say “neither” and the rest are unsure) today.yougov.com. This represents a shift toward skepticism: a year earlier (June 2023) only 35% said AI was having a negative effect on society, so negative sentiment rose to 42% by early 2024 today.yougov.com. Interestingly, people are less worried about AI’s personal impact on themselves – only 27% think AI will negatively affect their own life, and a similar 25% think it will affect them positively, with the remainder predicting no major personal effect today.yougov.com. In other words, Americans tend to say “AI might be bad for society at large, but I’m not too worried about my life.” This could reflect a feeling that the average person will be buffered from AI’s worst harms, or simply uncertainty about what it means for one’s own situation. But when it comes to society, the economy, and especially jobs, concern is high that AI could disrupt things in a harmful way.
Jobs and automation fears are a recurring theme. Multiple polls show that a majority of Americans expect AI to eliminate more jobs than it creates. Gallup’s 2024 Bentley University/Gallup survey found 75% of U.S. adults believe AI will reduce the total number of jobs in the country over the next 10 years news.gallup.com. Quinnipiac likewise reported that 56% say advancements in AI are likely to lead to a decrease in job opportunities for people, versus only 13% who expect an increase (24% think it won’t make much difference) poll.qu.edu. The prevailing public view is that AI means automation, and automation will displace workers. Worry about AI-driven job losses has also proven durable – Gallup noted that while the share seeing AI as more harmful than helpful overall declined slightly from 2023 to 2024, expectations of AI-driven job cuts were essentially unchanged news.gallup.com. The paradox is that while Americans fear AI will shrink employment in general, most do not fear for their own jobs. In the Quinnipiac poll of employed adults, only 21% said they are at least somewhat concerned that AI may make their own job obsolete (just 6% very concerned), whereas 78% are “not so concerned” or not concerned at all about losing their job to AI poll.qu.edu. This aligns with Pew findings that people worry more about automation’s impact on the workforce broadly than on their personal employment brookings.edu. It suggests an optimism bias – other people might be laid off by AI, but individuals tend to feel their own roles are safe (perhaps due to confidence in unique human skills, or simply because job loss feels remote until it happens).
Economic outlook: Americans are split on AI’s economic upsides. Only about one-quarter (27%) think AI will have a positive effect on the U.S. economy, while one-third (33%) expect a negative economic effect and the rest anticipate a neutral or mixed impact today.yougov.com. Similarly, just 23% of U.S. adults told Pew that AI will positively impact how people do their jobs in the next 20 years, whereas 35% believe the impact on jobs will be mostly negative or disruptive pewresearch.org. In Pew’s study, a large share (roughly 40%) admitted they aren’t sure what AI’s economic impact will be pewresearch.org, highlighting uncertainty about such a complex issue. On the brighter side, Americans do see some societal benefits in specific domains: for instance, medical and healthcare innovations are a noted positive (discussed in the next section). In Quinnipiac’s April 2025 poll, 59% of Americans thought AI will do more good than harm for medical advances, whereas only 24% thought it will do more harm in medicine poll.qu.edu. This suggests people can envision AI’s value in curing diseases, improving diagnostics, or personalizing treatment. However, education is an area of worry – 54% said AI will do more harm than good to the education system (only 32% expect more good than harm in education) poll.qu.edu, likely reflecting fears of cheating, diminished learning, or misinformation affecting students. In day-to-day life generally, Americans tilt negative but not overwhelmingly so: about 44% think AI will do more harm than good in their daily life, while 38% think it will do more good poll.qu.edu (the rest have no opinion). These nuanced views show that public opinion isn’t monolithically anti-AI; rather, people weigh context. AI in medicine is welcomed by many, AI in classrooms or the workplace is met with more skepticism.
Underpinning many concerns about AI’s societal impact is the issue of misinformation and misuse. An overwhelming 86% of Americans are concerned that political leaders will use AI to spread fake or misleading information poll.qu.edu. This Quinnipiac finding speaks to the current climate of anxiety about deepfakes, AI-generated propaganda, and election interference. A majority (63%) are very concerned about this specific threat poll.qu.edu. Likewise, a Harvard-Harris poll in 2024 noted large shares of the public fear AI could be used to deceive voters or manipulate public opinion brookings.edu. The public also worries about AI’s effect on young people’s development: Quinnipiac found 83% of Americans are concerned that AI tools will diminish the youngest generation’s ability to think for themselves (including over half who are very concerned) poll.qu.edu. Parents and educators have voiced alarm about students relying on AI (like ChatGPT) to do their homework, potentially weakening learning and critical thinking. Concern on this point is especially high among women (86% of women vs. 79% of men are concerned) and is consistent across age groups – even 83% of Gen Z respondents themselves expressed concern about AI’s impact on the youngest generation’s independent thinking poll.qu.edu. Such findings show that cultural and generational values (like the importance of human creativity, originality, and effort) shape opinions on AI alongside practical economic considerations.
In summary, Americans see major disruptions on the horizon due to AI – particularly for jobs, information integrity, and the social fabric. They anticipate job losses, challenges to education, and new dangers in politics. At the same time, they recognize potential breakthroughs in areas like healthcare and are hopeful that in certain ways AI could improve lives. This ambivalence is captured well by a recent Brookings analysis of public opinion: overall, the U.S. public is more concerned than optimistic about AI’s impacts, yet views are multifaceted and sometimes contradictory – for example, people may welcome AI’s benefits in one domain even as they fear its consequences in another brookings.edu.
## Views on Specific AI Applications
Public opinion often depends on how AI is being used. Surveys since 2024 reveal stark differences in support or concern for different applications of AI – Americans draw lines between scenarios they consider acceptable and those that make them uneasy. Generally, people are more comfortable with AI when it’s used to augment high-stakes decisions (under human oversight) or to improve public safety, and least comfortable when AI might replace human judgment in personal, consequential matters like hiring, lending, or medical advice. Below is a breakdown of attitudes toward key AI applications:
- Facial Recognition & Policing: When it comes to law enforcement, Americans show relatively higher approval of AI. In the Quinnipiac poll, 53% of adults said they would be comfortable with police using an AI tool for suspect identification (e.g. facial recognition), while 42% would not poll.qu.edu. This majority support likely stems from viewing AI as a means to enhance public safety and catch criminals. Indeed, experts noted that respondents seem more open to AI “protecting society” in policing than to AI influencing personal outcomes poll.qu.edu. Political affiliation plays a role here: 66% of Republicans support police use of facial recognition, compared to 45% of Democrats (Democrats were roughly split, with 49% against) poll.qu.edu. Younger adults (Gen Z) were the most comfortable (62% in favor) while Millennials were more hesitant (only 46% in favor) poll.qu.edu. These differences aside, policing stands out as one domain where AI has a social mandate – many Americans are okay with surveillance technologies like face recognition when used to solve crimes, though privacy advocates often caution about the risks. (Notably, in earlier Pew research (2022–23), support for facial recognition was high for law enforcement purposes but very low for corporate or advertiser use – a pattern consistent with the prioritization of security over privacy in public opinion.)
- Hiring and Workplace Decisions: Strong opposition emerges to AI making hiring or personnel decisions. Gallup found 85% of Americans are concerned about companies using AI for hiring decisions (a full 55% “very concerned”) news.gallup.com. Quinnipiac similarly reported that only 30% of Americans would be comfortable with an AI screening job applications, whereas 64% are not comfortable letting AI decide who gets a job interview poll.qu.edu. The message is clear: people want humans in the loop when it comes to employment. A hiring decision can shape someone’s livelihood, and the public lacks confidence that AI can evaluate candidates fairly and holistically. This ties back to the trust issue – over half think AI cannot be unbiased today.yougov.com, so an AI acting as an HR gatekeeper is widely seen as unacceptable. Even among younger, tech-savvy groups, there isn’t majority support for AI-led hiring (only about 40% of under-30s said they’d trust AI’s fairness in such cases, according to crosstabs). Americans appear to draw a bright line: AI can assist, but should not replace, human judgment in hiring and firing decisions.
- Finance (Loans and Insurance): In areas like banking and insurance, public comfort with AI is likewise low. Quinnipiac asked about AI reviewing loan applications, and just 27% were comfortable with banks using AI to screen loans while 67% were not poll.qu.edu. For health insurance claims, only 23% approved of AI systems screening claims, versus 71% opposed poll.qu.edu. Most people do not want algorithms determining their access to credit or coverage – likely out of fear that automated systems might be error-prone or unfair, lacking the empathy or flexibility a human might show in individual circumstances. Interestingly, in that poll a majority did accept one administrative use: 51% were okay with AI automating traffic enforcement (like ticketing) via cameras, suggesting that when an AI application is seen as relatively straightforward and impersonal (a traffic violation is either clear-cut or not), people mind it less poll.qu.edu. But anything touching someone’s financial livelihood (loans, insurance, hiring) triggers resistance. Consistently across surveys, Americans say “no” to AI making important life-affecting decisions about individuals – they don’t want to be evaluated or managed by a black-box algorithm, preferring a human decision-maker even if imperfect.
- Healthcare and Medicine: Attitudes toward AI in healthcare are mixed and highly context-dependent. On one hand, most Americans are optimistic about AI’s potential for medical innovation – as noted, 59% believe AI will bring more good than harm to medical advances (e.g. helping find cures, improving diagnostics) poll.qu.edu. There’s general openness to AI as a tool for doctors: earlier surveys show broad support for AI that can detect diseases on scans or aid in treatment planning. However, trust drops when AI is in charge without a human. Gallup found 80% are at least somewhat concerned about AI being used to recommend medical advice or treatments on its own news.gallup.com. People worry about AI misdiagnoses or errors going uncaught without human oversight. And as mentioned, there’s deep skepticism of AI in health insurance administration (where it might deny claims). Another nuance: a 2024 AP-NORC study (noted in media) indicated that while many patients are fine with AI reading X-rays or monitoring vitals, a majority want their human doctors to have the final say. So Americans see AI as an assistant, not a replacement, in healthcare. They appreciate AI’s analytical power – for example, identifying patterns in scans or data – but still value human empathy, intuition, and accountability in medical decisions. Overall, the public is cautiously positive about AI in medicine when it augments what trained health professionals do (e.g. an AI that helps catch cancers earlier is welcomed), but wary of fully autonomous AI healthcare.
- Autonomous Vehicles: Self-driving cars and AI “driving” are another application met with public concern. According to Gallup, 83% of Americans are concerned about AI being used to drive vehicles news.gallup.com. High-profile accidents and uncertainties around liability likely fuel this wariness. Although companies have rolled out autonomous taxis in some cities, the average American isn’t ready to trust an AI at the wheel with no human backup. Surveys by Pew in 2023 similarly found a large majority would not want to ride in a driverless car and oppose widespread adoption of autonomous trucks on the roads washingtonpost.com. So while in theory AI-driven transport could reduce accidents in the long run, current public sentiment is that the technology isn’t mature enough – skepticism and safety fears dominate perceptions of driverless vehicles. This may shift as people gain more exposure to safe self-driving systems, but as of 2024, autonomy in transportation is viewed anxiously. (By contrast, lower-level AI driving aids like lane assist or smart cruise control are fairly accepted, since the human driver remains in control.)
- Generative AI (ChatGPT and content creation): The advent of ChatGPT and similar AI content generators in late 2022 kicked off widespread public intrigue – and concern – about this technology. By mid-2024, awareness of generative AI was high: about 60% of Americans had heard of ChatGPT within months of its release monmouth.edu, and usage climbed such that roughly one-third of adults had tried it at least once by 2024 (especially for information lookup or help with writing) ipsos.com today.yougov.com. Polls find people have conflicted views on generative AI. Many appreciate its usefulness – for example, among those who use AI, 69% have used it to search for information, making this the most popular use case ipsos.com. People also use chatbots to draft emails, brainstorm, or for entertainment. In Quinnipiac’s poll, 37% said they’ve used AI tools for researching topics, 24% for school/work projects, and around 16–18% for creative tasks like writing emails or making images poll.qu.edu. At the same time, there is fear of generative AI’s power to mislead. A Pew study found more than 70% of Americans are worried about AI-generated misinformation in elections, and in Monmouth’s 2023 poll 78% said AI-written news articles would be a bad thing for society monmouth.edu. Essentially, people are amazed by tools like ChatGPT, but don’t fully trust the content those tools produce. For instance, a majority believe students will use AI to cheat on assignments monmouth.edu. In YouGov’s survey, 45% of Americans said they do not trust AI to provide accurate information today.yougov.com – reflecting well-known issues of AI chatbots “hallucinating” false answers. There’s also an emerging cultural debate about AI-generated art and media: an Ipsos poll found 35% of Americans dismiss AI-generated art as “not real art,” and words like “fake” and “controversial” have increasingly been used to describe AI media ipsos.com. This indicates growing skepticism about the authenticity and value of AI-created content in creative fields.
In summary, generative AI draws both enthusiasm and concern. People enjoy the convenience (it can save time and spark creativity), but they worry about authenticity and ethics (plagiarism, misinformation, job displacement for writers/artists). Demographically, younger adults are far more open to generative AI. In one YouGov result, 36% of under-45 Americans said AI will have a positive impact on society, vs. only 19% of those 45+, likely because younger people see tools like ChatGPT as helpful and normal today.yougov.com. However, even many younger users exercise caution and critically evaluate AI outputs. As generative AI becomes more integrated (in search engines, office software, etc.), the public’s comfort level may increase – but as of 2024, the average American is intrigued but wary, using ChatGPT for fun or minor tasks but hesitant to fully trust its information or let it replace human content creators.
- Surveillance and Privacy: Beyond the specific examples above, a broader concern persists about AI-enabled surveillance. While not always polled in these exact terms, it underlies responses to topics like facial recognition and deepfakes. Americans generally accept surveillance for security (e.g., AI security cameras) but oppose surveillance that feels intrusive or unjust. For example, Pew found a large majority oppose employers monitoring their workers with AI or using facial recognition to track attendance – such uses are seen as overreach. Privacy is a significant anxiety: in an OpenAI/YouGov poll (2023), over two-thirds of Americans agreed that AI could threaten privacy and civil liberties if not carefully managed. The high concern about government misuse of AI (spreading propaganda, etc.) also relates to fears of authoritarian surveillance. Support for “AI surveillance” is therefore context-specific – high for catching criminals or fraud, low for monitoring ordinary citizens. The public wants clear limits on how AI can collect and use personal data. Notably, only 18% of Americans trust tech companies to safeguard their personal AI data (from an Atlantic/Ipsos tech poll in 2023), illustrating the privacy trust deficit. This is an area where policy could significantly influence public sentiment: strong privacy protections and transparency requirements might alleviate some surveillance fears.
Overall, these attitudes highlight a consistent theme: Americans favor AI in roles that assist humans or provide public benefits, but resist AI in roles that displace human judgment or threaten individual rights. They are saying “yes” to AI as a tool – whether helping doctors, aiding police, or powering helpful gadgets – and “no” to AI as an arbiter of human fates (jobs, loans, legal decisions) or as an unchecked content creator that could deceive or devalue human work. These nuanced opinions will be crucial for policymakers and companies to consider as they integrate AI into various sectors.
Demographic Differences in AI Attitudes #
Public opinion on AI varies notably across demographic groups. Age is one of the clearest dividing lines. Younger Americans (Millennials and Gen Z) tend to be more familiar with AI, more optimistic about its benefits, and more trusting of its use, while older Americans are more skeptical and fearful. For example, as noted earlier, adults under 45 are twice as likely as those over 45 to believe AI will have a positive impact on society (36% vs. 19%) today.yougov.com, and under-45s are also far more likely to say AI is making their own life easier (46% vs. 18%) today.yougov.com. Younger cohorts are more comfortable using AI tools (45% of under-30s use AI at least monthly today.yougov.com) and more trusting of AI’s decisions – nearly half of under-30 adults trust AI to provide accurate information, whereas most over-65 adults do not. Older Americans, by contrast, express greater concern about AI’s risks: for instance, Boomers overwhelmingly feel AI will do more harm than good in daily life, and the Silent Generation (ages ~80+) reports the lowest levels of AI understanding and usage poll.qu.edu. That said, even young people have significant reservations (a majority are concerned about issues like AI’s effect on jobs and misinformation), but the balance of hope vs. fear is more favorable among youth.
Education and income also correlate with AI views. More educated Americans are generally more aware of AI and somewhat more positive about its potential – but also attuned to its risks. Pew data shows college graduates are much more likely to have heard a lot about AI and to understand it better pewresearch.org. In the workforce, those with college degrees are embracing AI tools faster (learning new AI skills at roughly double the rate of non-degree holders) poll.qu.edu. Additionally, higher-income individuals tend to foresee more benefits from AI: Quinnipiac found a striking split where 60% of Americans earning $200k+ per year said AI will do more good than harm in their daily life, whereas 59% of those earning under $50k said it will do more harm poll.qu.edu. Affluent and highly educated respondents may be more optimistic because they expect to leverage AI to their advantage (and have less fear of job displacement). In contrast, lower-income and less-educated Americans often voice greater anxiety about AI, perhaps feeling more vulnerable to automation or less empowered to benefit from tech advancements. However, it’s not a simple linear relationship – education also brings awareness of AI’s ethical pitfalls, so more educated groups also strongly support regulation and responsible AI. The key takeaway is that opportunity shapes perspective: those who see AI as an opportunity (often the young, educated, and high-earning) lean more positive, while those who see it as a threat to jobs or privacy (often older, lower-income) lean more negative.
Gender differences emerge as well. Surveys consistently find that women are more wary of AI than men. Pew notes that women are less likely than men to expect AI’s impact on the U.S. will be positive (only 12% of women vs. 22% of men in one 2024 poll) pewresearch.org. Women also report lower personal familiarity with AI on average poll.qu.edu. In the Quinnipiac poll, women expressed higher levels of concern about AI harming the youngest generation (86% of women were concerned vs. 79% of men) poll.qu.edu. Similarly, YouGov data showed men more often choose optimistic descriptors for AI, whereas women more frequently choose words like “concerned” or “scared.” These differences might be related to disparities in tech industry representation, as well as sociocultural factors – men, on average, might have more exposure to AI through tech jobs or hobbies and thus more confidence in it, whereas women may be more sensitive to potential social and ethical issues. Still, it’s important to note that majorities of both genders have significant concerns; the gap is one of degree (for instance, both men and women overwhelmingly support more AI regulation, with women slightly more in favor) pewresearch.org.
Political affiliation also colors AI opinions in certain areas. AI itself is not yet a sharply partisan issue (there is broad bipartisan agreement on many concerns), but subtle differences exist. Republicans are somewhat more likely to downplay risks and oppose regulation, whereas Democrats tend to express more concern about harms and desire for oversight. For example, Pew found Democrats are somewhat more likely than Republicans to worry government won’t regulate AI enough (64% vs. 55%) pewresearch.org, indicating stronger support for regulation on the left. On specific uses like policing, Republicans are far more supportive (as noted, 66% of GOP vs. 45% of Democrats back facial recognition for police) poll.qu.edu. Republicans also report slightly less personal concern about AI: a 2023 Pew survey showed Republican identifiers were less likely than Democrats to say they are “more concerned than excited” about AI (though concern was high in both groups) brookings.edu. This could tie into broader trust in institutions – Democrats might worry about corporate misuse and bias, while Republicans might worry about government overreach or tech elitism. Interestingly, on the question of AI’s impact on jobs, Democrats were more likely than Republicans to anticipate job losses (65% of Democrats vs. 51% of Republicans in Quinnipiac said AI would decrease jobs) poll.qu.edu, possibly reflecting differing trust in technology-driven growth. However, these partisan gaps are relatively modest right now. Both conservatives and liberals express caution about AI’s unchecked growth. It’s an issue that cuts across typical party lines, at least so far. Geographic region has not shown dramatic differences in polls (urban and rural respondents both voice AI anxieties, though urban residents may have slightly more exposure to AI applications like self-driving cars or delivery bots).
In summary, younger, higher-educated, and higher-income Americans are more inclined to see AI as an opportunity and to use it, whereas older, less affluent, and female Americans tend to emphasize AI’s risks and unknowns. Political leanings nudge certain views but are not dominant predictors on most AI questions. These demographic patterns suggest that experience and empowerment with technology breed more positive attitudes – those who feel prepared to adapt with AI or shape its use are less afraid of it. As AI literacy spreads and the technology becomes more commonplace, some of the gaps in perception may narrow. But for now, understanding these group differences is important for tailoring communication about AI: for instance, public education efforts might target older and less tech-savvy populations to alleviate fears, while industry transparency efforts might focus on earning the trust of women and others most skeptical of corporate AI practices.
Conclusion #
In the span of 2024–2025, polling reveals a U.S. public that is highly aware of AI’s emergence and guarded in its expectations. Americans, on the whole, see significant risks – from job displacement to erosion of trust in information – and this drives a predominant mood of concern and calls for oversight. Yet amid the caution, there are notes of optimism: many people can point to ways AI is helping in their lives or could improve fields like medicine. Demographic splits show that those who interact with AI more (young, educated) tend to be more hopeful and accommodating of the technology, whereas those less directly engaged are more fearful of its abstract threats. Importantly, across the board there is a desire for a human-centered approach to AI – the public wants AI’s growth to be managed responsibly, with transparency, regulation, and retention of human judgment where it matters. They are not rejecting AI outright; rather, Americans are “drawing lines” (as Quinnipiac analysts put it) about where AI is appropriate and where it should be curtailed poll.qu.edu. These nuanced opinions suggest that if policymakers and companies address the public’s concerns – ensuring AI is fair, safe, and under control – then acceptance of AI’s benefits will grow. For now, however, the prevailing sentiment is cautious vigilance: people see AI’s promise, but they are watching closely to see whether it fulfills that promise or endangers the values and opportunities they care about. The data from 2024 onward paints a clear picture: Americans’ relationship with AI is evolving, but it will demand earned trust and tangible proof of benefit for optimism to catch up with concern brookings.edu.
Sources:
- Ipsos: https://www.ipsos.com/en-us/artificial-intelligence-key-insights-data-and-tables
- Reuters: https://www.reuters.com/technology/ai-threatens-humanitys-future-61-americans-say-reutersipsos-2023-05-17/
- Axios: https://www.axios.com/2023/08/08/ai-divides-america-polls-chatgpt
- Axios: https://www.axios.com/2023/11/07/ai-regulation-chat-gpt-us-politics-poll
- Pew Research Center: https://www.pewresearch.org/social-trends/2023/07/26/which-u-s-workers-are-more-exposed-to-ai-on-their-jobs/
International Comparisons #
Edelman’s latest research found that 72% of people in China trust AI, compared with just 32% in the United States.
- Not only is trust higher in China, it’s higher in much of the developing world than it is in the United States, according to Edelman’s research.
- Trust in AI was highest in India, at 77%, followed by Nigeria at 76%, Thailand at 73% and then China.
- Only six of the surveyed countries ranked lower than the U.S. in their trust in the new technology: Canada (30%), Germany (29%), the Netherlands (29%), United Kingdom (28%), Australia (25%) and Ireland (24%).
- Globally, 52% of men said they trusted AI vs. 46% of women, with younger respondents significantly more trusting of the technology than older ones.
- In the U.S., AI was trusted more by Democrats (38%) than Republicans (34%) or independents (23%).
- Higher-income respondents were also more trusting (51%) than those with middle (45%) or low (36%) incomes.