Welcome to Tech Policy Hub!

Welcome! #

Welcome to Tech Policy Hub, the comprehensive resource designed to illuminate the complex world of technology legislation. In an era where digital advancements rapidly reshape our society, understanding the policies that govern innovation, privacy, and digital rights is no longer reserved for the experts.

Everything on this website was selected for the quality of its analysis. Think of this as Wikipedia for tech and innovation policy.

Whether you’re a concerned citizen, a tech enthusiast, a policy professional, or a policymaker, Tech Policy Hub offers you a deep dive into the latest developments, debates, and legislation at both the state and federal levels. Our mission? To ensure you’re not just keeping up but also comprehending the implications and the road ahead.

This site is lovingly maintained by Will Rinehart, Senior Research Fellow at the CGO.

Artificial intelligence #

AAF’s Artificial Intelligence (AI) Policy Tracker is a tool to help navigate federal AI legislation. The tracker divides bills into five categories based on their intent and provides a short summary of each bill, detailing how it would be implemented, the mechanisms for enforcing it, and notes on the bill’s construction and potential impact if enacted. The tracker is updated bi-weekly based on information available on Congress.gov.

Adam Thierer’s review of AI legislation.

The National Telecommunications and Information Administration (NTIA) recently put out a proceeding to better understand how commercial entities collect and use data. Importantly, it was seeking to understand how “specific data collection and use practices potentially create or reinforce discriminatory obstacles for marginalized groups regarding access to key opportunities, such as employment, housing, education, healthcare, and access to credit.” What the NTIA seeks to tackle is a wicked problem in Rittel and Webber’s classic definition.

The public interest comments I filed argued for a twist on that theme.

Wicked problems, which plague public policy and planning, are distinct from natural problems because "natural problems are definable and separable and may have solutions that are findable [while] the problems of governmental planning and especially those of social or policy planning are ill-defined." But the case of fairness in AI shows that they are over-defined. The reason "social problems are never solved" but are "only resolved, over and over again" is that there are many possible solutions.

When the NTIA issues its final report, it should resist the tendency to reduce wicked problems into natural ones. Rather, the agency should recognize, as one report described it, the existence of a hidden universe of uncertainty about AI models.

To address this problem holistically:

  • The first section explains how data-generating processes can create legibility but never solve the problem of illegibility.
  • The second section explains what is meant by bias, breaks down the problems in model selection, and walks through the problem of defining fairness (see the sketch after this list).
  • The third section explores why people have a distaste for the kind of moral calculations made by machines and why we should focus on impact.
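
To make the model-selection point concrete, here is a minimal sketch using synthetic data and hypothetical numbers, not anything from the filing itself: two models that differ only in a defensible threshold choice can have nearly identical overall accuracy while distributing false positives and false negatives across groups quite differently, so any judgment about bias depends on which model, and which metric, is selected.

```python
# A hedged illustration with synthetic data: two near-identical models,
# noticeably different group-level error rates.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)                        # a binary protected attribute
# The underlying signal is slightly shifted between groups (hypothetical).
score = rng.normal(loc=0.50 + 0.05 * group, scale=0.15, size=n)
label = (score + rng.normal(scale=0.10, size=n) > 0.55).astype(int)

def evaluate(threshold):
    pred = (score > threshold).astype(int)
    accuracy = (pred == label).mean()
    rates = {}
    for g in (0, 1):
        m = group == g
        fpr = ((pred == 1) & (label == 0) & m).sum() / ((label == 0) & m).sum()
        fnr = ((pred == 0) & (label == 1) & m).sum() / ((label == 1) & m).sum()
        rates[g] = (round(fpr, 2), round(fnr, 2))
    return accuracy, rates

# Two "similar" models: the same score, two defensible decision thresholds.
for t in (0.52, 0.58):
    accuracy, rates = evaluate(t)
    print(f"threshold {t}: accuracy {accuracy:.2f}, (FPR, FNR) by group {rates}")
```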

Will AI steal jobs? #

Understanding Job Loss Predictions From Artificial Intelligence – Worries about artificial intelligence (AI) tend to emanate from concerns about the impact of the new technology on work. Many fear that automation will destabilize labor markets, depress wage growth, and lead to long-term secular decline in the labor market and economy as a whole. Policymakers should know that (1) similar models charting AI job loss can result in widely different job loss predictions; (2) most AI job loss predictions aren’t compared against current economic baselines; and (3) implementing AI-based systems isn’t costless and is likely to take some time to accomplish. Not only is there a lack of consensus on the best way to model AI-based labor changes, but more important, there is no consensus as to the best policy path to help us prepare for these changes.
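
As a rough illustration of point (1), the sketch below uses entirely made-up employment figures and automatability scores: the same occupation-level inputs yield very different headline estimates depending on whether a model counts every job in an occupation above a cutoff as at risk or weights each occupation by its score.

```python
# Hypothetical numbers only: the same occupation-level inputs, two common
# modeling choices, and two very different headline job-loss estimates.
occupations = {
    # name: (employment in thousands, assumed automatability score, 0-1)
    "cashiers":            (3300, 0.90),
    "truck drivers":       (2000, 0.65),
    "accountants":         (1400, 0.55),
    "registered nurses":   (3100, 0.10),
    "software developers": (1600, 0.05),
}
total = sum(emp for emp, _ in occupations.values())

# Model A (occupation-based): every job in an occupation above a cutoff is "at risk".
cutoff = 0.70
at_risk_a = sum(emp for emp, p in occupations.values() if p > cutoff)

# Model B (task-weighted): only the automatable share of each occupation is "at risk".
at_risk_b = sum(emp * p for emp, p in occupations.values())

print(f"Model A (cutoff {cutoff}): {at_risk_a / total:.0%} of jobs at risk")
print(f"Model B (weighted):       {at_risk_b / total:.0%} of jobs at risk")
```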

Tracing the impact of automation on workers and firms – August 14, 2020 – The Benchmark – Automation will be a slow process in many sectors. The productivity data is uneven, firms are reluctant to change, and only some industries seem to be affected by robotics or other automation methods.

Is AI biased? #

Mandating AI Fairness May Come At The Expense Of Other Types of Fairness – In 2016, ProPublica sparked a conversation over the use of risk assessment algorithms when they concluded that a widely used “score proved remarkably unreliable in forecasting violent crime” in Florida. Their examination of the racial disparities in scoring has been cited countless times, often as a proxy for the power of automation and algorithms in daily life. As this examination continues, two precepts are worth keeping in mind. First, the social significance of algorithms needs to be considered, not just their internal model significance. While the accuracy of algorithms is important, more emphasis should be placed on how they are used within institutional settings. And second, fairness is not a single idea. Mandates for certain kinds of fairness could come at the expense of other forms of fairness. As always, policymakers need to be cognizant of the tradeoffs.
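
To see why fairness is not a single idea, consider a small sketch with synthetic data, none of it drawn from the ProPublica analysis: when two groups have different base rates of the predicted outcome, a score that is calibrated for both groups will generally produce unequal false positive rates at any shared threshold, so mandating one fairness criterion can foreclose another.

```python
# Synthetic illustration of a well-known tension: with unequal base rates, a
# score that is calibrated for both groups yields unequal false positive rates.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

def simulate(base_rate):
    # Draw risk scores whose mean equals the group's base rate, then make the
    # score calibrated by construction: P(outcome | score = s) = s.
    score = rng.beta(10 * base_rate, 10 * (1 - base_rate), n)
    outcome = rng.random(n) < score
    flagged = score > 0.5                      # one shared decision threshold
    fpr = (flagged & ~outcome).sum() / (~outcome).sum()
    # Calibration check: among people scored near 0.5, about half have the outcome.
    mid = (score > 0.45) & (score < 0.55)
    calibration = outcome[mid].mean()
    return fpr, calibration

for name, base_rate in [("group A", 0.3), ("group B", 0.5)]:
    fpr, calibration = simulate(base_rate)
    print(f"{name}: base rate {base_rate:.0%}, FPR {fpr:.2f}, "
          f"P(outcome | score near 0.5) {calibration:.2f}")
```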

An AI Innovation Agenda – The AI race is on. Countries, entrepreneurs, and firms across the globe are jockeying to capitalize on artificial intelligence (AI). While it has received much negative publicity, AI creates many opportunities for the economy and country. To ensure that the United States can benefit from the possibilities of these new technologies, policymakers should work toward implementing this AI agenda.

Colorado has become the first state to enact a law regulating artificial intelligence systems. The law, signed on Friday by Governor Jared Polis, requires developers of “high-risk” AI systems to take reasonable measures to avoid algorithmic discrimination. It also mandates that developers disclose information about these systems to regulators and the public and conduct impact assessments.