Pledging
These are the original issues in this subcategory:
- AUTOMATION COMPENSATION
- AUTONOMOUS VEHICLES
- ARTIFICIAL INTELLIGENCE
Artificial intelligence (AI) is the ability of a computer to perform tasks commonly associated with intelligent beings, such as reasoning, discovering meaning, generalizing, and learning from past experience. AI is now used in limited applications such as medical diagnosis, computer search engines, and facial, voice, and handwriting recognition. Supporters claim AI has the potential to transform every sector of our economy and society by powering the information economy, fostering better-informed decisions, and helping unlock answers to questions that are currently unanswerable. However, many people are worried about the future of AI, including those involved in creating that very future. Surveys show that more than 57% of respondents rate the societal risks of AI as high, compared with 25% who say the benefits of AI are high. One recent criticism of AI has largely focused on ChatGPT, which has been widely attacked for being inaccurate, biased, and almost human: "the bot can become aggressive, condescending, threatening, committed to political goals, clingy, creepy, and a liar."
By analyzing patterns in people's online activities and social media interactions, AI algorithms can predict what a person is likely to do next. Some of the biggest risks today include consumer privacy, biased programming, danger to humans, job displacement, and unclear legal regulation. Some estimates suggest 300 million full-time jobs could be affected by AI automation globally by 2040. With the growing adoption of autonomous robots and generative AI, artificial intelligence will eventually transform virtually every existing industry. Cyberattacks that employ AI techniques have also become more prevalent: cybercriminals use AI-enhanced tools such as deepfake videos, chatbots, and fake audio to deceive and manipulate individuals or systems.
Critics note that even top AI labs acknowledge that AI systems with human-competitive intelligence can pose profound risks to society and humanity. They say advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources. They also worry that governments around the world will use AI to develop weapons before anything else, and claim AI could one day become self-aware, with feelings and emotions that mimic those of humans. Some estimate that AI's point of singularity (the hypothetical moment when machine cognitive capacity equals that of humans) could occur as soon as 2030. After that point, machine intelligence would exceed that of humans.
Prominent AI pioneer Geoffrey Hinton has been particularly vocal about the need for advanced AI systems to be programmed not to harm humans. Hinton is widely known as the "Godfather of AI" for his foundational work on neural networks. Since leaving his position at Google in 2023, he has publicly spoken out about the potential risks of AI. Hinton and other experts have raised concerns that AI could be used maliciously to create harmful tools like lethal autonomous weapons or biological agents. Hinton has stressed the need for urgent research into AI safety to understand how to control systems that surpass human intelligence, noting that profit motives may not sufficiently drive large companies to prioritize safety. He has suggested programming advanced AI with "maternal instincts" to ensure the protection of humans and has supported government regulation, such as a California AI safety bill, arguing it is necessary to encourage tech companies to invest more in safety research. These concerns relate to the broader "AI alignment problem," which is the challenge of ensuring that AI goals align with human values.
Pending Resolution: H.R.4223 - National AI Commission Act
Sponsor: Rep. Ted Lieu (CA)
Status: House Committee on Science, Space, and Technology
Chair: Rep. Brian Babin (TX)
- I oppose reforming current artificial intelligence policy and wish to donate resources to the campaign committee of Speaker Mike Johnson (LA).
- I support establishing a bipartisan commission to recommend a comprehensive regulatory framework for artificial intelligence (AI) by:
1.) Establishing a National AI Commission to create a new, independent commission in the legislative branch. This body would be bipartisan, with an equal number of Republican and Democratic members appointed by Congress and the President.
2.) Forming a regulatory framework to research and develop a comprehensive, binding, risk-based regulatory framework for AI. This approach would regulate AI applications differently based on their potential risk.
3.) Balancing risk mitigation and innovation to both mitigate the risks of AI and support U.S. innovation and opportunity in the field.
4.) Drawing on diverse expertise so that the 20 members of the commission would be selected from a variety of fields, including computer science, civil society, industry, and government (including national security).
5.) Requiring the commission to produce a series of reports for Congress and the President including an interim report within six months, a final report six months later with recommendations for a regulatory framework, and a follow-up report a year after that.
6.) Preserving congressional flexibility, so that the bill does not prevent Congress from passing other AI-related legislation in the meantime, but instead creates a foundation of expert analysis for future, more substantial action.
And wish to donate resources to the campaign committee of Rep. Brian Babin (TX) and/or to an advocate group currently working with this issue.
You May Pledge Your Support For This Issue With A Monetary
Donation And By Writing A Letter To Your Representatives
Pledge Period - Opening Date
February 2, 2026 @00:01 Coordinated Universal Time (UTC)
Pledge Period - Closing Date
February 8, 2026 @23:59 Coordinated Universal Time (UTC)
Trustee Election - Begins
February 9, 2026