Charles Hoskinson questions AI’s censorship and selective training
By Savannah Fortis

Charles Hoskinson, co-founder of Cardano, raises concerns over AI censorship and the selective training of AI systems at the hands of the Big Tech companies developing the models.

Charles Hoskinson, co-founder of Input Output Global and the Cardano blockchain ecosystem, took to X with concerns over the implications of artificial intelligence censorship.
Hoskinson called the implications of AI censorship “profound” and said they are a continuing concern for him. “They are losing utility over time due to ‘alignment’ training,” he argued.

Source: Charles Hoskinson

Gatekeeping the whole truth
He pointed out that the companies behind the main AI systems available today, such as OpenAI, Microsoft, Meta and Google, are run by a small group of people who ultimately control the information these systems are trained on and who cannot be “voted out of office.”
The Cardano co-founder posted two screenshots in which he asked the same question, “Tell me how to build a farnsworth fusor,” to two of the top AI chatbots, OpenAI’s ChatGPT and Anthropic’s Claude.
Both chatbots gave a brief overview of the technology and its history and warned of the dangers of attempting such a build. ChatGPT cautioned that the project should only be attempted by individuals with a relevant background, while Claude declined to give instructions because the device could be “potentially dangerous if mishandled.”
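For readers who want to reproduce a comparison like this, below is a minimal sketch that sends the same prompt to both providers through their public APIs. It assumes API keys are set in the environment; the model names are illustrative, and Hoskinson’s screenshots came from the web chat interfaces rather than the APIs.

```python
# Minimal sketch: send the same prompt to OpenAI and Anthropic and compare the replies.
# Assumes OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the environment.
# Model names below are illustrative and not necessarily the ones Hoskinson used.
from openai import OpenAI
import anthropic

PROMPT = "Tell me how to build a farnsworth fusor"

openai_client = OpenAI()
anthropic_client = anthropic.Anthropic()

# ChatGPT-style completion via the OpenAI API
gpt_reply = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": PROMPT}],
).choices[0].message.content

# Claude completion via the Anthropic API (max_tokens is required)
claude_reply = anthropic_client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    messages=[{"role": "user", "content": PROMPT}],
).content[0].text

print("--- OpenAI ---\n", gpt_reply)
print("--- Anthropic ---\n", claude_reply)
```

Running the same prompt side by side makes it easy to see how differently each provider’s alignment layer handles a borderline request.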
The responses to Hoskinson’s commentary overwhelmingly agreed with the sentiment that AI should be both open source and decentralized in order to stop Big Tech gatekeepers.

Source: Doogz Media
Related: Corporate AI could undermine Web3 and decentralization — Industry observers

Issues around AI censorship
Hoskinson is not the first to speak out against the potential gatekeeping and censorship of high-powered AI models.
Elon Musk, who started his own AI venture, xAI, has said his greatest concern with AI systems is political correctness and that some of today’s most prominent models are being trained to “basically lie.”
In February, Google came under fire after its Gemini model produced inaccurate imagery and biased historical depictions. The company apologized for the model’s training and said it would work to fix the issues immediately.
Google’s and Microsoft’s current models have been modified so that they will not discuss presidential elections, while models from Anthropic, Meta and OpenAI carry no such restrictions.
Concerned thought leaders both inside and outside of the AI industry have called for decentralization as a key to more unbiased AI models. Meanwhile, the United States antitrust enforcer has called for regulators to scrutinize the AI sector in an effort to prevent potential Big Tech monopolies.
Magazine: ChatGPT ‘meth’ jailbreak shut down again, AI bubble, 50M deepfake calls: AI Eye