
Can Internet Censorship Counter AI-Generated Misinformation? Exploring the Implications

In a recent Senate hearing on generative AI, Democrat senators advocated for increased control and censorship of the internet to counter the dissemination of what they perceive as AI-generated “misinformation.” The push has sparked concerns about potential limitations on freedom of expression and the negative implications it may have for the future of the internet.

Controlling Generative AI to Tackle Misinformation

During the hearing titled “Oversight of A.I.: Rules for Artificial Intelligence,” several senators interrogated key witnesses, including OpenAI CEO Sam Altman, IBM Chief Privacy & Trust Officer Christina Montgomery, and NYU Professor Emeritus Gary Marcus. The focus of the inquiry was how generative AI could be censored or restricted to prevent the creation of content labeled as misinformation.

Senator Amy Klobuchar (D-MN) expressed her concerns, stating, “We just can’t let people make stuff up and then not have any consequence.” She pointed to an incident involving OpenAI’s generative AI chatbot, ChatGPT, which allegedly produced false information when asked to generate a tweet about a polling location in Bloomington, Minnesota. Klobuchar used this example to highlight the potential dangers of misinformation during elections.

In response, Altman assured Klobuchar that OpenAI is taking measures to censor ChatGPT outputs by refusing to generate certain content and monitoring user activity to identify excessive content generation. He acknowledged the importance of addressing the impact of AI-generated content on elections and advocated for collaboration between industry and government to tackle the issue effectively.

Ironically, Klobuchar referenced past incidents of alleged Russian interference and emphasized the need for stricter regulation, drawing a parallel between the potential dangers of AI-generated misinformation and the purported interference in the 2016 presidential election. However, investigations have yielded limited evidence supporting claims of mass Russian interference. Both Twitter officials and Special Counsel John Durham’s report found little substantiation for these allegations.

Expanding Control and Restriction

Senator Mazie Hirono (D-HI) expressed concerns about harmful AI-generated images that went viral, particularly those depicting the arrest of former President Donald Trump. She questioned whether OpenAI’s classification of such images as harmful aligns with her own perception and called for guardrails to protect the country from potentially harmful content.

Senator Richard Blumenthal (D-CT) supported the idea of banning generative AI outputs related to elections and sought input from the witnesses regarding other types of content that should be banned or restricted. Montgomery highlighted the significance of addressing “misinformation” as a crucial area of concern. Marcus emphasized the need for tight regulation, particularly in combating “medical misinformation.”

Senator Josh Hawley (R-MO) proposed the creation of a federal right of action that would enable individuals harmed by AI-generated “medical misinformation” or “election misinformation” to sue generative AI companies. However, Marcus cautioned against this suggestion, warning that it may primarily benefit lawyers rather than effectively address the issues at hand.

Regulatory Approaches and Safety Measures

Senator Chris Coons (D-DE) discussed the challenge of disinformation influencing elections, calling it entirely predictable. He expressed frustration with Congress’s inability to effectively regulate social media disinformation and asked whether the European Union’s AI Act, which regulates AI based on risk and bans certain uses, could serve as a model. Montgomery supported the EU’s approach, emphasizing the need for comprehensive guardrails.

Coons further highlighted the substantial risk posed by AI’s ability to deliver wildly incorrect information and stressed the importance of regulating AI. Altman assured him that OpenAI invested significant time in establishing safety standards for GPT-4, the latest version of its language model, to minimize potential harms.

Senator Lindsey Graham (R-SC) questioned the effectiveness of Section 230, which grants online platforms immunity from civil liability for user-generated content if they moderate it in “good faith.” Graham suggested that Section 230 was a mistake and called for increased liability when platforms allow slander or fail to enforce their terms of service against actions like bullying. All witnesses agreed that liability should exist when harm occurs, although they recognized the distinction between reproducing and generating content. Altman proposed a fresh approach rather than relying on Section 230.

Unaddressed Concerns and Potential Consequences

While the senators expressed their eagerness to enforce content censorship on AI companies to combat misinformation, they failed to address concerns about potential over-censorship. The application of vague and subjective terms in social media censorship has already led to the mass censorship of truthful content, such as the Hunter Biden laptop story and discussions regarding the effectiveness of the Covid vaccine in preventing transmission.

Senator Hirono’s comments also indicated a willingness to extend sweeping censorship rules to AI-generated memes or similar content not intended to be taken seriously. While some individuals may have mistaken the AI-generated Trump arrest images as real, many recognized them as a playful experiment with AI, particularly as the anticipation of Trump’s arrest grew.

Moreover, major AI companies are already implementing censorship measures. The CEO of Midjourney, the generative AI tool responsible for creating the viral Trump arrest images, admitted to blocking various prompts, including those attempting to generate images featuring Chinese President Xi Jinping. OpenAI has also committed to implementing safeguards against misinformation.

Alongside censorship, lawmakers are proposing additional restrictions on this potentially transformative technology. Many senators advocate for granting government-approved licenses exclusively to companies offering generative AI tools.
