The Common Good Party — Policy Document Series — Issue 36

AI & Technology

Innovation With Guardrails

AI is rewriting the economy faster than any technology in human history. The global AI market is projected to reach $1.8 trillion by 2030. Six companies control over 90% of frontier AI development. Algorithms already decide who gets hired, who gets a loan, who gets parole, and what information reaches voters. The United States has zero federal laws regulating any of it. We can govern this technology — or be governed by the companies that build it.

$1.8T Projected global AI market by 2030 — with zero US federal regulation governing its development or deployment
0 Federal laws regulating AI in the United States — the EU enacted the world’s first comprehensive AI law in August 2024
63% Of Americans who say AI needs more regulation — bipartisan concern, zero congressional action
85M Jobs projected to be displaced by automation and AI by 2025 (World Economic Forum) — 97M new roles created, but in different industries and at different skill levels

Contents

  1. Executive Summary
  2. The Problem
  3. How We Got Here
  4. What Other Countries Do
  5. Our Policy — Seven Pillars
  6. How We Pay For It
  7. Implementation Timeline
  8. Addressing Counterarguments
  9. Citations
Section 01

Executive Summary

The United States is home to every major frontier AI company on Earth — and has not passed a single federal law governing the technology. The country building AI is the only major democracy not regulating it.

The global AI market is projected to reach $1.8 trillion by 2030, growing at roughly 37% annually. Six companies — Google, Microsoft, Meta, Amazon, OpenAI, and Anthropic — control over 90% of frontier AI model development. AI systems are already making high-stakes decisions about who gets hired, who qualifies for a loan, who is denied housing, who gets paroled, and what information reaches voters. There is no federal law requiring transparency, audits, or accountability for any of these systems. The EU passed its comprehensive AI Act in August 2024. Canada proposed criminal penalties for reckless AI deployment. Brazil blocked Meta from training AI on user data. The United States has done nothing.

Seven pillars of the AI & Technology agenda: (1) American AI Accountability Act — four-tier risk framework banning social scoring and mass biometric surveillance; (2) Algorithmic bias audits — mandatory for hiring, lending, housing, and criminal justice AI; (3) Deepfake and election integrity protections — federal ban on AI-generated candidate impersonation; (4) Data rights in AI training — opt-in consent and compensation; (5) AI worker transition — $20B fund over 10 years; (6) Big Tech AI antitrust — structural separation and open-model requirements; (7) AI in government — mandatory disclosure, human review, and due process.

The question is not whether AI will be regulated. The question is whether it will be regulated by democratic governments accountable to citizens — or by corporate boardrooms accountable to shareholders. The Common Good Party chooses democracy. We support innovation — with transparency, accountability, and democratic oversight. This paper lays out a comprehensive framework modeled on international best practice and grounded in the principle that the most powerful technology in human history should be governed by the public, not by six companies. See Issue #21 (Internet & Privacy), Issue #30 (Media & Press Freedom), Issue #20 (Corporate Power), and Issue #13 (Labor).

Section 02

The Problem

Five interlocking failures: a total regulatory vacuum, documented algorithmic discrimination, deepfake threats to elections, massive worker displacement with no transition plan, and unchecked government deployment of AI in high-stakes decisions.

The Regulatory Vacuum
The global AI market is projected to reach $1.8 trillion by 2030. Six companies control over 90% of frontier AI model development. The concentration is unprecedented: the entire trajectory of a technology that will reshape every sector of the economy is being determined by a handful of corporate boardrooms with zero democratic oversight. The EU enacted the world’s first comprehensive AI law in August 2024. The United States has none.
Algorithmic Discrimination
Amazon’s hiring AI penalized résumés containing the word “women’s.” The COMPAS recidivism algorithm has a 45% false positive rate for Black defendants versus 23% for white defendants, per ProPublica. HUD settled with Facebook over AI-driven housing ad discrimination. AI-powered tenant screening tools have denied housing based on flawed data. There is no federal law requiring audits, transparency, or accountability for any of these systems. See Issue #21 — Internet & Privacy.
Deepfakes and Election Integrity
AI-generated robocalls impersonated President Biden in the 2024 New Hampshire primary. Deepfake technology can produce indistinguishable video of any public figure saying anything. Only a handful of states have passed deepfake election laws; there is no federal prohibition. The FCC has ruled AI-generated voice calls illegal under existing robocall law, but enforcement infrastructure does not exist. See Issue #30 — Media & Press Freedom.
Worker Displacement Without a Plan
The World Economic Forum projects 85 million jobs displaced by automation and AI by 2025, with 97 million new roles created — but the displaced workers and the new roles are in different industries, at different skill levels, and in different geographies. There is no federal program for AI-specific workforce transition, retraining, or income support. The transition is happening — the preparation is not. See Issue #13 — Labor.
Government AI Without Guardrails
Federal and state governments are deploying AI in high-stakes decision-making with minimal oversight. AI tools assist in determining benefits eligibility, criminal sentencing recommendations, predictive policing, and immigration enforcement. An Idaho Medicaid algorithm cut home care hours for disabled residents by up to 42% before being struck down in court. No federal law requires transparency, human review, or due process protections for government AI decisions. See Issue #12 — Criminal Justice for AI in sentencing and policing.
Section 03

How We Got Here

The US regulatory vacuum is not an accident — it is the product of deliberate lobbying, a bipartisan fear of “stifling innovation,” and the structural inability of Congress to legislate on technology at the speed technology moves.

1996
Section 230 and the “Hands-Off” Precedent
Section 230 of the Communications Decency Act established the principle that internet platforms are not liable for user-generated content. While reasonable in 1996 for message boards, the principle metastasized into a bipartisan consensus that technology companies should be left alone. This “don’t regulate the internet” instinct became the template for AI: let the companies build it, and we’ll figure out the rules later. Later never came.
2010s
The Rise of Algorithmic Decision-Making
Machine learning systems moved from research labs to production deployment across hiring, lending, criminal justice, and content moderation. Companies deployed AI systems at scale without any obligation to test for bias, disclose their use, or provide human review. ProPublica’s 2016 investigation of the COMPAS algorithm revealed racial disparities in criminal risk scoring. Amazon quietly abandoned its AI recruiting tool after discovering it systematically penalized women. Each scandal produced headlines, not legislation.
2017–2022
Big Tech Lobbying and Self-Regulation Theater
The five largest technology companies spent over $60 million annually on federal lobbying. Industry groups published “AI ethics principles” and formed self-regulatory bodies — none with enforcement authority. The strategy was deliberate: create the appearance of responsible governance while ensuring no binding legislation passed. Companies published responsible AI reports while deploying systems that discriminated along predictable demographic lines.
2022–2024
The Generative AI Explosion
ChatGPT launched in November 2022 and reached 100 million users in two months — at the time, the fastest-growing consumer application in history. Generative AI capabilities expanded from text to images, video, voice, and code. Deepfakes became trivially easy to produce. The EU accelerated the AI Act. The UK created the AI Safety Institute. Canada proposed criminal penalties. The US held Senate hearings, produced executive orders with no enforcement mechanism, and passed nothing.
2024–Present
The Concentration Accelerates
Frontier AI model training costs escalated to hundreds of millions of dollars, concentrating development in a handful of companies with access to capital and compute. Six companies now control over 90% of frontier AI development. The same companies own the cloud infrastructure, build the models, and operate the consumer applications. Vertical integration is nearly complete. Without intervention, AI governance defaults to whatever these companies decide is profitable.
Section 04

What Other Countries Do

The EU passed the world’s first comprehensive AI law. Canada proposed criminal penalties for reckless AI. Brazil blocked Meta from training AI on user data. The United States — home to every major frontier AI company — has passed nothing. The country building the technology is the only one not governing it.

European Union — EU AI Act (2024)
Approach: Risk-tiered comprehensive regulation.
Key provisions: Four risk tiers with mandatory conformity assessments for high-risk AI. Real-time biometric surveillance banned. GPAI model obligations for frontier systems. Transparency requirements for all AI-generated content.
Result vs. U.S.: First comprehensive AI law in the world, effective August 2024. The US has nothing comparable.

United Kingdom — AI Safety Institute
Approach: Pro-innovation, sector-specific regulation.
Key provisions: The AI Safety Institute (AISI) conducts frontier model testing. Regulation runs through existing sector regulators. Bletchley Declaration signatory. Safety evaluation before deployment.
Result vs. U.S.: A sector-led approach with dedicated safety testing infrastructure the US lacks.

Canada — AIDA (Bill C-27)
Approach: Risk-based regulation with criminal penalties.
Key provisions: The Artificial Intelligence and Data Act imposes criminal penalties for reckless AI deployment causing serious harm. Risk-based obligations. Mandatory impact assessments for high-impact systems.
Result vs. U.S.: Criminal penalties for reckless AI — treating AI harm as seriously as other forms of corporate recklessness.

Brazil — ANPD enforcement
Approach: Active enforcement before legislation is finalized.
Key provisions: The National Data Protection Authority blocked Meta from training AI on user data. Stopped Worldcoin biometric data collection. An AI regulatory framework (PL 2338/2023) is advancing.
Result vs. U.S.: Active enforcement protecting citizens now, not waiting for perfect legislation.
The common lesson: Every major democracy except the United States has either enacted comprehensive AI regulation, created dedicated safety testing infrastructure, or actively enforced existing law against AI harms. The US approach — voluntary industry self-regulation and non-binding executive orders — has produced exactly what voluntary self-regulation always produces: the companies do what is profitable and discard what is not. The question is not whether regulation will “stifle innovation” — it is whether the world’s most powerful technology will be governed by democracy or by six corporate boardrooms.
Section 05

Our Policy — Seven Pillars

Seven pillars targeting seven distinct failures of AI governance, each grounded in evidence and modeled on international best practice. Together they constitute a unified framework for ensuring that the most powerful technology in human history serves democratic society rather than replacing it.

Pillar 01 American AI Accountability Act

Establish a comprehensive federal AI regulatory framework modeled on the EU AI Act’s risk-tiered approach. The United States cannot remain the only major democracy with no binding AI regulation while hosting every major frontier AI company on Earth.

  • Four-tier risk framework: Banned uses (social scoring, real-time mass biometric surveillance, subliminal manipulation) are prohibited outright. High-risk applications (hiring, lending, housing, insurance, criminal justice, healthcare) require mandatory pre-deployment bias audits, human review, and transparency. General-purpose AI models above computational thresholds must provide model cards, training data disclosure, and safety testing. Research and open-source development face minimal regulation to preserve innovation. A minimal sketch of this tier logic appears at the end of this pillar.
  • Federal AI Regulatory Authority: Create a dedicated federal agency with the technical expertise, enforcement authority, and budget to regulate AI — not a toothless advisory committee. Modeled on the structure that gave the FAA authority over aviation safety and the FDA authority over pharmaceutical safety.
  • Public incident database: Establish an AI harm incident database modeled on the Aviation Safety Reporting System that NASA administers on the FAA’s behalf. Mandatory reporting for high-risk AI failures. Public access to anonymized data. The aviation industry’s safety culture was built on transparent incident reporting — AI needs the same.
  • Registration and conformity assessment: All high-risk AI systems must be registered with the federal regulator before deployment. Conformity assessments by accredited third parties. Post-market surveillance with mandatory adverse event reporting.
Cross-reference: Issue #21 (Internet & Privacy) for the broader algorithmic accountability framework
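To make the four tiers concrete, here is a minimal sketch in Python of how a regulator's intake tooling might encode the classification. The use-case lists are taken from the pillar text above; the compute threshold is illustrative (the EU AI Act uses 10^25 training FLOPs as its trigger for general-purpose model obligations), and the real numbers would be set in rulemaking.

```python
from enum import Enum

class RiskTier(Enum):
    BANNED = "prohibited outright"
    HIGH_RISK = "pre-deployment audit, human review, transparency"
    GPAI = "model cards, training data disclosure, safety testing"
    MINIMAL = "light-touch, to preserve research and open source"

# Illustrative category lists drawn from the pillar text above.
BANNED_USES = {"social scoring", "real-time mass biometric surveillance",
               "subliminal manipulation"}
HIGH_RISK_DOMAINS = {"hiring", "lending", "housing", "insurance",
                     "criminal justice", "healthcare"}

# Hypothetical compute trigger for general-purpose model obligations.
GPAI_TRAINING_FLOPS = 1e25

def classify(use_case: str, training_flops: float = 0.0) -> RiskTier:
    """Map a deployment description to its regulatory tier."""
    if use_case in BANNED_USES:
        return RiskTier.BANNED
    if use_case in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH_RISK
    if training_flops >= GPAI_TRAINING_FLOPS:
        return RiskTier.GPAI
    return RiskTier.MINIMAL

assert classify("hiring") is RiskTier.HIGH_RISK
assert classify("social scoring") is RiskTier.BANNED
assert classify("chat assistant", training_flops=3e25) is RiskTier.GPAI
```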
Pillar 02 Mandatory Algorithmic Bias Audits

Any AI system used in employment decisions, tenant screening, credit scoring, insurance underwriting, or criminal justice must undergo independent third-party bias audits. Amazon’s hiring AI, the COMPAS recidivism algorithm, and Facebook’s housing ad system all demonstrated that unaudited AI entrenches discrimination along predictable demographic lines.

  • Annual independent third-party bias audits for all high-risk AI systems. Results published publicly. Disparate impact standards apply. Companies cannot grade their own homework. The underlying audit arithmetic is sketched at the end of this pillar.
  • Private right of action: Individuals harmed by biased AI decisions can sue. The enforcement mechanism cannot depend solely on an overwhelmed FTC — affected individuals need standing to hold companies accountable directly.
  • Joint FTC-EEOC rulemaking establishing audit standards, reporting requirements, and enforcement procedures for AI in hiring, lending, housing, and insurance.
  • Explainability requirements: When AI makes a consequential decision about an individual — denying a job, a loan, housing, or parole — the affected person has the right to a plain-language explanation of how the decision was made and a meaningful human appeal process.
Evidence: COMPAS false positive rate — 45% for Black defendants vs. 23% for white defendants (ProPublica). Amazon hiring AI systematically penalized women’s résumés (Reuters).
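The audit arithmetic itself is simple enough to show directly. Below is a minimal Python sketch of two metrics an independent auditor would report: the false positive rate gap of the kind ProPublica documented, and the disparate impact ratio under the EEOC's four-fifths rule. The counts are invented for illustration; only the 45% and 23% COMPAS figures cited above are published findings.

```python
def false_positive_rate(false_pos: int, true_neg: int) -> float:
    """Share of truly low-risk people the system wrongly flags as high risk."""
    return false_pos / (false_pos + true_neg)

def selection_rate(selected: int, applicants: int) -> float:
    """Share of a group that receives the favorable outcome (e.g., hired)."""
    return selected / applicants

# Hypothetical audit counts, chosen to mirror the cited 45% vs. 23% gap.
fpr_group_a = false_positive_rate(false_pos=450, true_neg=550)  # 0.45
fpr_group_b = false_positive_rate(false_pos=230, true_neg=770)  # 0.23

# EEOC four-fifths rule: a group's selection rate below 80% of the
# highest group's rate is prima facie evidence of disparate impact.
ratio = selection_rate(30, 100) / selection_rate(60, 100)

print(f"FPR gap: {fpr_group_a:.0%} vs {fpr_group_b:.0%}")
print(f"Impact ratio {ratio:.2f}: "
      f"{'fails' if ratio < 0.8 else 'passes'} the four-fifths rule")
```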
Pillar 03 Deepfake and Election Integrity Protections

AI-generated deepfakes pose an existential threat to election integrity. The 2024 New Hampshire primary robocall proved the threat is not hypothetical. Deepfake technology can produce indistinguishable video of any public figure saying anything. Without federal law, each election cycle will be more compromised than the last.

  • Federal ban on AI-generated impersonations of candidates within 90 days of any election. Criminal penalties for deepfake election interference. Platform liability for failure to remove labeled deepfakes after notice.
  • Mandatory disclosure labels on all AI-generated media. Machine-readable provenance labels embedded in AI-generated content at the point of creation. Platforms must label AI-generated content in feeds. A simplified example of such a label appears at the end of this pillar.
  • Platform accountability: Social media platforms must disclose when AI systems are used for content recommendation and moderation. Algorithmic amplification of AI-generated election content within 90 days of an election triggers liability.
  • Election infrastructure protection: Dedicated federal resources for protecting election infrastructure from AI-enabled cyberattacks. Mandatory security standards for AI systems used in election administration.
Cross-reference: Issue #30 (Media & Press Freedom) for the broader disinformation framework and platform accountability provisions
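Machine-readable provenance labels have an industry starting point in the C2PA content-credentials standard. The sketch below is a deliberately simplified stand-in, using only the Python standard library: the generator embeds a signed manifest at the point of creation, and a verifier checks both that the label is authentic and that the content is unaltered. A production scheme would use asymmetric signatures with certified keys rather than the shared demo key shown here.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # illustration only; real systems use asymmetric keys

def make_provenance_manifest(content: bytes, generator: str) -> dict:
    """Attach a signed 'AI-generated' label at the point of creation."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
        "ai_generated": True,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(content: bytes, manifest: dict) -> bool:
    """Check both label authenticity and content integrity."""
    sig = manifest.pop("signature")
    payload = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    manifest["signature"] = sig  # restore the caller's manifest
    untampered = hmac.compare_digest(sig, expected)
    return untampered and manifest["content_sha256"] == hashlib.sha256(content).hexdigest()

video = b"...rendered media bytes..."
label = make_provenance_manifest(video, generator="example-model-v1")
assert verify(video, label)          # intact content verifies
assert not verify(b"edited", label)  # altered content fails verification
```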
Pillar 04 Data Rights in AI Training

AI companies have scraped the entire public internet — text, images, voice, video, creative works — to train models worth billions of dollars. The creators and data subjects whose work built these models received nothing. This is the largest uncompensated appropriation of human creative output in history.

  • Affirmative opt-in consent: Your data cannot train AI models without your explicit, informed consent. Not buried in terms of service. Not assumed from public posting. Affirmative opt-in, revocable at any time.
  • Compensation framework: When personal data, creative works, or likeness are used in commercial AI training, a compensation framework ensures that creators and data subjects share in the value their contributions generate.
  • Right to know: Every person has the right to know whether their data was used in AI training datasets. Companies must maintain auditable records of training data provenance. One possible shape for these records is sketched at the end of this pillar.
  • Right to opt out retroactively: If your data was used in training without consent, you have the right to demand its removal from future model iterations. Applies to text, images, voice, likeness, and creative works.
Cross-reference: Issue #21 (Internet & Privacy) for the comprehensive data rights framework and digital consent architecture
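As one sketch of what “auditable records of training data provenance” could look like in practice, the snippet below models a per-item consent ledger that supports the right to know and the retroactive opt-out described above. The field names and in-memory ledger are hypothetical; a real registry would be a regulated, persistent system.

```python
from dataclasses import dataclass, field

@dataclass
class TrainingDataRecord:
    subject_id: str                # person or rights-holder the data belongs to
    source: str                    # where the item was obtained
    consent_given: bool            # affirmative opt-in, never inferred from posting
    consent_revoked: bool = False  # retroactive opt-out flag
    used_in_models: list[str] = field(default_factory=list)

def usable_for_training(rec: TrainingDataRecord) -> bool:
    """Only explicitly consented, non-revoked items may enter a training set."""
    return rec.consent_given and not rec.consent_revoked

def right_to_know(ledger: list[TrainingDataRecord], subject_id: str) -> set[str]:
    """Answer a subject's query: which models were trained on my data?"""
    return {model for rec in ledger if rec.subject_id == subject_id
            for model in rec.used_in_models}
```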
Pillar 05 AI Worker Transition and Retraining

The World Economic Forum projects 85 million jobs displaced by automation and AI by 2025 and 97 million new roles created. The new roles require different skills, exist in different industries, and are located in different geographies than the jobs being eliminated. Without a federal transition program, the displacement falls entirely on workers who had no role in the decisions that automated their jobs.

  • Federal AI Workforce Transition Fund: $20 billion over 10 years. Portable retraining accounts that follow workers across jobs and industries. Community college AI skills programs. Employer tax credits for retraining existing workers rather than replacing them.
  • Advance notice requirements: Employers deploying AI that eliminates positions must provide 90-day advance notice. Transition assistance is required, not optional. Workers should learn about AI displacement from their employer, not from a locked door.
  • Sector-specific transition plans for industries facing the highest automation exposure: customer service, data entry, transportation, retail, and administrative support. Each plan developed with worker input and union participation.
  • Community impact assessments: When AI deployment will displace a significant portion of a community’s workforce, the deploying company must fund community transition support — modeled on the community impact provisions in plant closure law.
Cross-reference: Issue #13 (Labor) for the broader worker protection framework and Issue #43 (Automation & Future of Work)
Pillar 06 Big Tech AI Antitrust

Six companies control over 90% of frontier AI development. The same companies own the cloud infrastructure AI runs on, build the models, and operate the consumer applications. Vertical integration is nearly complete. Without structural intervention, the AI economy defaults to monopoly.

  • Structural separation: Companies that build frontier AI models AND control the cloud infrastructure AND operate the application layer face mandatory structural separation. You cannot be the railroad, the freight company, and the department store simultaneously.
  • Interoperability mandates: AI APIs must meet interoperability standards. Proprietary lock-in — where switching AI providers requires rebuilding entire systems — is the digital equivalent of the railroad monopoly tactic of incompatible track gauges.
  • Open-model requirements: All AI research produced with federal funding must be published as open-weight models. Publicly funded science produces public goods, not proprietary advantages.
  • National AI Research Resource (NAIRR): Fully funded at $500M+/year. Public universities guaranteed access to the compute resources that frontier AI development requires. Open-source AI development protected by statute — corporate lobbying cannot ban the open research that drives scientific progress.
Cross-reference: Issue #20 (Corporate Power) for the broader antitrust framework and market concentration enforcement provisions
Pillar 07 AI in Government Decision-Making

Federal and state governments are deploying AI in benefits determinations, sentencing recommendations, predictive policing, surveillance, and immigration enforcement. An Idaho Medicaid algorithm cut home care hours for disabled residents by up to 42% before being struck down in court. Due process applies to government decisions — including government decisions made by algorithms.

  • Mandatory disclosure: Every federal agency must publicly disclose all AI systems used in decision-making that affects individual rights. No secret algorithms determining who gets benefits, who gets surveilled, or who gets deported.
  • Human review requirement: Any AI-assisted government decision affecting individual rights — benefits eligibility, sentencing, immigration, law enforcement targeting — must include mandatory human review with the authority to override the algorithm. This pattern is sketched at the end of this pillar.
  • Algorithmic impact assessments: Before any government agency deploys an AI system, it must complete a public algorithmic impact assessment analyzing potential disparate impacts, error rates, and civil liberties implications.
  • Annual public audits: All government AI systems subject to annual independent audits with published results. An independent AI oversight office within the federal government audits AI use across all agencies.
  • Federal moratorium on AI social scoring: No US government agency may deploy social scoring or mass behavioral surveillance AI. This is not China. Government does not rate citizens.
Cross-reference: Issue #12 (Criminal Justice) for AI in sentencing and policing; Issue #37 (Disability Rights) for AI in benefits determinations
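Architecturally, the human review requirement means the algorithm may only recommend: a named human with override authority makes the decision that binds, and the record preserves both. A minimal sketch of that pattern, with hypothetical names:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ReviewedDecision:
    outcome: str                   # the binding decision
    algorithm_recommendation: str  # what the model suggested
    reviewer_id: str               # a named human is always on the record
    overrode_algorithm: bool
    explanation: str               # plain-language reason, per the due process pillar

def decide_with_human_review(case: dict, model: Callable[[dict], str],
                             reviewer_id: str, human_choice: str,
                             explanation: str) -> ReviewedDecision:
    """The model output is advisory; only the human's choice takes effect."""
    recommendation = model(case)
    return ReviewedDecision(
        outcome=human_choice,
        algorithm_recommendation=recommendation,
        reviewer_id=reviewer_id,
        overrode_algorithm=(human_choice != recommendation),
        explanation=explanation,
    )
```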
Section 06

How We Pay For It

AI regulation is not primarily a spending problem — it is a governance problem. The major costs are the AI Workforce Transition Fund and the NAIRR. Both are investments with measurable returns, funded through targeted revenue sources that ensure the companies profiting from AI bear the costs of governing it.

AI Workforce Transition Fund
Mechanism: Portable retraining accounts, community college programs, employer tax credits for retraining rather than replacement, and sector-specific transition plans.
Cost: $20B over 10 years ($2B/yr).
Return: 97M new roles made accessible to displaced workers; reduced unemployment insurance costs; maintained consumer spending and tax revenue from retrained workers.

National AI Research Resource (NAIRR)
Mechanism: Public compute access for universities and researchers; open-weight model development; competition preservation against a six-company monopoly.
Cost: $500M+/year.
Return: Preserved competition and open-source innovation; public university research capacity; reduced long-term monopoly costs to consumers and taxpayers.

Federal AI Regulatory Authority
Mechanism: A dedicated agency with technical expertise, enforcement authority, and registration and audit infrastructure.
Cost: $300M–$500M/year.
Return: Registration fees and penalty revenue make it partially self-funding; avoided costs of unregulated AI harms (discrimination lawsuits, election interference, privacy violations).

AI Company Assessment
Mechanism: An annual assessment on companies deploying high-risk AI systems, scaled to revenue. Companies profiting from AI fund the cost of governing it — the same principle as FDA user fees for pharmaceutical companies.
Revenue: $1B–$3B/year from frontier AI companies.
Return: Fully funds the regulatory infrastructure; creates an incentive for responsible deployment to reduce assessment rates.

Avoided Costs
Mechanism: Proactive regulation heads off algorithmic discrimination lawsuits, election interference remediation, worker displacement emergency spending, and privacy violation penalties.
Value: Multi-billion dollars annually in avoided crisis costs.
Return: Proactive regulation is cheaper than reactive crisis response — the FAA model proves this for aviation safety.
Net fiscal picture: The total annual cost of the AI governance framework is approximately $3–4 billion per year. The global AI market is projected at $1.8 trillion by 2030. The six largest AI companies have a combined market capitalization exceeding $10 trillion. The cost of governing AI is a rounding error relative to the profits being generated. The cost of not governing it — in discrimination, election manipulation, worker displacement, and monopoly extraction — is incalculable.
Section 07

Implementation Timeline

Implementation is phased to establish the regulatory framework immediately, begin enforcement in Year 2, and achieve full operational capacity by Year 4. Executive actions on day one signal that the regulatory vacuum is over.

Phase 1 — Foundation
Year 1 — Months 1–12
  • Executive order establishing the four-tier AI risk framework
  • Federal moratorium on AI social scoring systems
  • All federal AI systems disclosed publicly
  • Deepfake election provisions take effect
  • FTC and EEOC begin joint bias audit rulemaking
  • NAIRR funding authorized at $500M+/year
  • AI Workforce Transition Fund established
Milestone: Risk framework defined; federal AI systems disclosed; deepfake election protections active; NAIRR launched; Transition Fund accepting applications
Phase 2 — Enforcement
Year 2 — Months 13–24
  • Federal AI Regulatory Authority operational
  • High-risk AI registration system live
  • First mandatory bias audits completed for hiring and lending AI
  • AI company assessment collection begins
  • Data consent and opt-out provisions enforceable
  • Government AI algorithmic impact assessments underway
  • First sector-specific worker transition plans published
Milestone: First enforcement actions for non-compliant high-risk AI; bias audit results published; registration database operational; 100,000+ workers in transition programs
Phase 3 — Scale
Years 3–4 — Months 25–48
  • Full conformity assessment regime for high-risk AI
  • Antitrust structural separation proceedings initiated
  • NAIRR at full capacity with university access guaranteed
  • Comprehensive data rights enforcement operational
  • Government AI audit results published annually
  • Interoperability standards finalized and enforceable
  • AI Workforce Transition Fund at full annual deployment
Milestone: High-risk AI fully regulated; structural separation underway; open-source AI preserved; 500,000+ workers retrained; government AI audited and transparent
Phase 4 — Evaluation
Year 5+ — Month 49+
  • Independent evaluation of all seven pillars
  • Bias audit effectiveness review — disparate impact trends
  • Worker transition outcomes assessment
  • Market concentration review — has structural separation worked?
  • International regulatory harmonization assessment
  • Framework updated for next-generation AI capabilities
Milestone: Measurable reduction in algorithmic discrimination; AI market competition preserved; worker displacement managed; democratic oversight operational; framework adapting to technological change
Section 08

Addressing Counterarguments

“Regulation will stifle American AI innovation and hand leadership to China.”
The EU passed the world’s first comprehensive AI law in August 2024. European AI research has not collapsed. European AI companies have not relocated. The UK created the AI Safety Institute — UK AI investment has increased, not decreased. Aviation regulation did not destroy the American aviation industry — it made it the safest in the world. Pharmaceutical regulation did not end drug development — it ensured drugs work and are safe. The “regulate and they’ll leave” argument is the same argument every industry has made about every regulation since the Clean Air Act. It has never been true. Moreover, the NAIRR and open-source AI preservation provisions in this framework actively support innovation — the regulation targets harms, not research.
“AI bias is a technical problem, not a regulatory one — companies will fix it themselves.”
Amazon’s hiring AI was deployed for years before Reuters discovered it penalized women. The COMPAS algorithm was used in criminal sentencing across the country before ProPublica demonstrated its racial disparities. Facebook’s housing ad system discriminated for years before HUD intervened. Companies had every opportunity to “fix it themselves.” They did not — because bias testing costs money, and no one was required to spend it. Voluntary self-regulation has produced the current results: documented discrimination with no accountability. Mandatory audits work precisely because they are mandatory.
“Data consent requirements will make AI development impossible.”
The GDPR imposed data consent requirements across Europe in 2018. AI development in Europe did not stop. Companies adapted. The argument that “we need to use your data without permission or we can’t build AI” is functionally identical to arguing “we need to dump waste in rivers or we can’t manufacture goods.” The externality is real, and so is the solution: internalize the cost. If a company’s business model cannot survive requiring consent to use people’s data, that business model deserves to fail.
“The $20 billion worker transition fund is too expensive.”
The six largest AI companies have a combined market capitalization exceeding $10 trillion. The AI market is projected at $1.8 trillion by 2030. The worker transition fund costs $2 billion per year. That is 0.11% of the projected market value the technology will generate. Meanwhile, 85 million jobs are projected for displacement. The cost of not funding transition — mass unemployment, lost consumer spending, social instability, emergency welfare spending — dwarfs $2 billion annually. The companies capturing trillions in value can fund the transition for the workers whose jobs they automate.
“AI antitrust will break up the companies driving American competitiveness.”
The antitrust provisions require structural separation for companies that simultaneously control infrastructure, models, and applications — not the breakup of every AI company. Standard Oil’s breakup produced more value for shareholders and more innovation for the economy than the monopoly ever did. AT&T’s breakup created the modern telecommunications industry. Competition drives innovation; monopoly suppresses it. When six companies control 90%+ of a technology that will reshape every sector of the economy, the competition concern is not theoretical — it is urgent.
“Government should not regulate what it doesn’t understand.”
The FDA regulates pharmaceuticals. Congress members are not biochemists. The FAA regulates aviation. Senators do not fly planes. The SEC regulates financial markets. Legislators are not quants. Every major regulatory body employs domain experts to develop and enforce technical standards. The Federal AI Regulatory Authority would do the same. The alternative — “let the companies regulate themselves because only they understand the technology” — is not a regulatory philosophy. It is an abdication. Sixty-three percent of Americans say AI needs more regulation. Democracy means the public gets a say.
Section 09

Citations

“The question is not whether AI will be regulated. The question is whether it will be regulated by democratic governments accountable to citizens — or by corporate boardrooms accountable to shareholders. We choose democracy.”
— The Common Good Party

Sources & References

  1. Bloomberg Intelligence — AI Market to Hit $1.3T by 2032: bloomberg.com — AI Market Forecast
  2. World Economic Forum — Future of Jobs Report 2023: weforum.org — Future of Jobs Report
  3. Pew Research Center — Public Views on AI (August 2023): pewresearch.org — AI Public Concern
  4. EU AI Act — Official Text (Regulation 2024/1689): eur-lex.europa.eu — EU AI Act
  5. ProPublica — Machine Bias / COMPAS Investigation: propublica.org — Machine Bias
  6. FCC — AI-Generated Voice Calls Ruling (2024): fcc.gov — AI Robocall Ruling
  7. NIST — AI Risk Management Framework: nist.gov — AI Risk Management
  8. Stanford HAI — AI Index Report 2024: aiindex.stanford.edu — AI Index Report
  9. Reuters — Amazon Scraps Secret AI Recruiting Tool (2018): reuters.com — Amazon AI Recruiting
  10. ACLU — Government AI Deployment Reports: aclu.org — Privacy & Technology
  11. UK AI Safety Institute: gov.uk — AI Safety Institute
  12. Canada Bill C-27 (AIDA): parl.ca — Artificial Intelligence and Data Act
Paid for by The Common Good Party (thecommongoodparty.com) and not authorized by any candidate or candidate's committee.