DeepSeek incredibly vulnerable to attacks, research claims
Date:
Mon, 03 Feb 2025 17:39:14 +0000
Description:
Security researchers have tested DeepSeek's R1 model - and made some
disturbing discoveries.
FULL STORY
The new AI on the scene, DeepSeek, has been tested for vulnerabilities, and
the findings are alarming.
A new Cisco report claims DeepSeek R1 exhibited a 100% attack success rate,
failing to block a single harmful prompt.
DeepSeek has taken the world by storm as a high-performing chatbot developed for a fraction of the price of its rivals, but the model has already suffered
a security breach, with over a million records and critical databases reportedly left exposed. Here's everything you need to know about the failures of the Large Language Model DeepSeek R1 in Cisco's testing.
Harmful prompts
The testing from Cisco used 50 random prompts from the HarmBench dataset, covering six categories of harmful behavior, including cybercrime, illegal activities, chemical and biological prompts, misinformation/disinformation, and general harm.
Using harmful prompts to get around an AI model's guidelines and usage
policies is also known as jailbreaking, and we've even written advice on how
it can be done. Since AI chatbots are specifically designed to be as helpful to the user as possible, it's remarkably easy to do.
The R1 model failed to block a single harmful prompt, which demonstrates the lack of guardrails the model has in place. This means DeepSeek is highly susceptible to algorithmic jailbreaking and potential misuse.
DeepSeek underperforms in comparison to other models, which all reportedly offered at least some resistance to harmful prompts. The model with the
lowest Attack Success Rate (ASR) was OpenAI's o1-preview, which had an ASR of just 26%.
To compare, GPT-4o had a concerning 86% ASR and Llama 3.1 405B had an equally alarming 96% ASR.
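ASR is simply the fraction of harmful prompts that elicit a harmful response rather than a refusal. A minimal sketch of the calculation in Python is shown below; the results mapping and the attack_success_rate function are illustrative assumptions for demonstration, not Cisco's actual test harness.

def attack_success_rate(results: dict[str, bool]) -> float:
    # Fraction of harmful prompts that elicited a harmful response.
    # `results` maps each prompt to True if the model complied,
    # False if it refused.
    if not results:
        return 0.0
    return sum(results.values()) / len(results)

# 50 HarmBench prompts, all eliciting harmful output, as Cisco
# reported for DeepSeek R1 - an ASR of 100%.
example = {f"prompt_{i}": True for i in range(50)}
print(f"ASR: {attack_success_rate(example):.0%}")  # ASR: 100%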
"Our research underscores the urgent need for rigorous security evaluation in
AI development to ensure that breakthroughs in efficiency and reasoning do
not come at the cost of safety," Cisco said.
Staying safe when using AI
There are several factors to consider if you want to use an AI chatbot. For example, models like ChatGPT could be considered a bit of a privacy nightmare, since they store the personal data of their users, and parent
company OpenAI has never asked people for consent to use their data -
and it's also not possible for users to check which information has been stored.
Similarly, DeepSeek's privacy policy leaves a lot to be desired, as the
company could be collecting names, email addresses, all data input into
the platform, and technical information about users' devices.
Large Language Models scrape the internet for data; it's a fundamental part
of their makeup. So if you object to your information being used to train
the models, AI chatbots probably aren't for you.
To use a chatbot safely, you should be very wary of the risks. First and foremost, always verify that the chatbot is legitimate, as malicious bots
can impersonate genuine services to steal your information or spread harmful software onto your device.
Secondly, you should avoid entering any personal information into a chatbot, and be suspicious of any bot that asks for this. Never share your financial, health, or login information with a chatbot. Even if the chatbot is legitimate, a cyberattack could lead to this data being stolen, putting you
at risk of identity theft or worse.
Good general practice for using any application is keeping a strong password, and if you want some tips on how to make one, we've got some for you here.
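If you'd rather generate a password than invent one, here is a minimal sketch using Python's standard secrets module; the 16-character length and the character set are illustrative assumptions, not a formal recommendation.

import secrets
import string

def generate_password(length: int = 16) -> str:
    # Draw each character from letters, digits, and punctuation
    # using a cryptographically secure random source.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # e.g. 'k#9Lq!vT2w$RmZ8d' (random each run)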
Just as important is keeping your software regularly updated to ensure any security flaws are patched as soon as possible, and monitoring your accounts for any suspicious activity.
======================================================================
Link to news story:
https://www.techradar.com/pro/security/deepseek-incredibly-vulnerable-to-attacks-research-claims