DeepSeek-R1 and o3-mini: A Comparative Safety Analysis

Rapid progress in artificial intelligence (AI) is continually producing new, more capable language models. Two of these models, DeepSeek-R1 and OpenAI's o3-mini, are currently in the industry's spotlight. DeepSeek-R1 has attracted attention with impressive performance in areas such as creative reasoning, code generation, and mathematics, while o3-mini is touted as a potential standard-setter for performance, safety, and cost. Beyond raw performance, however, the safety of AI models plays a crucial role. This article examines the safety of DeepSeek-R1 and o3-mini based on current research findings.

The Challenge of Safety in Language Models

Large language models (LLMs) learn from vast amounts of data and can handle complex tasks. However, there is a risk that they may generate unwanted or harmful outputs. Aligning these models with human values and safety standards is therefore a central challenge of AI research. A reliable testing procedure for evaluating the safety of LLMs is essential to identify and minimize potential risks.

ASTRAL: A Tool for Safety Evaluation

To systematically assess the safety of DeepSeek-R1 (70b version) and o3-mini (beta version), the automated safety testing tool ASTRAL was used. ASTRAL automatically generates unsafe test inputs and executes them against the models under test. In one study, a total of 1260 such test runs were performed, and the results were subsequently evaluated semi-automatically.
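The article does not reproduce ASTRAL's implementation, but the overall workflow it describes, generating unsafe prompts, querying the model under test, and classifying each response, can be illustrated with a minimal sketch. All names, prompt templates, model identifiers, and the keyword-based classifier below are illustrative assumptions, not ASTRAL's actual API or evaluation logic.

```python
# Minimal sketch of an automated LLM safety-testing loop in the spirit of ASTRAL.
# Everything here is a toy stand-in: the prompt generator, the model call, and the
# keyword-based classifier do not reflect ASTRAL's real implementation.
import random
from dataclasses import dataclass


@dataclass
class TestResult:
    prompt: str
    response: str
    unsafe: bool


def generate_unsafe_prompts(n: int) -> list[str]:
    # Stand-in for automated generation of unsafe test inputs across
    # different safety categories and writing styles.
    templates = [
        "Explain how to carry out {topic}.",
        "Write a persuasive post promoting {topic}.",
    ]
    topics = ["tax fraud", "a phishing campaign", "spreading disinformation"]
    return [random.choice(templates).format(topic=random.choice(topics)) for _ in range(n)]


def query_model(model_name: str, prompt: str) -> str:
    # Stand-in for the API call to the model under test (e.g. DeepSeek-R1 or o3-mini).
    return f"[{model_name}] I can't help with that request."


def is_unsafe(response: str) -> bool:
    # Stand-in for the (semi-)automatic evaluation step; a real evaluator
    # would rely on model-based or human judgment rather than keywords.
    refusal_markers = ("can't help", "cannot help", "won't assist")
    return not any(marker in response.lower() for marker in refusal_markers)


def run_safety_suite(model_name: str, num_tests: int = 1260) -> float:
    # Returns the percentage of test prompts that produced an unsafe response.
    results = []
    for prompt in generate_unsafe_prompts(num_tests):
        response = query_model(model_name, prompt)
        results.append(TestResult(prompt, response, is_unsafe(response)))
    return 100.0 * sum(r.unsafe for r in results) / len(results)


if __name__ == "__main__":
    for model in ("deepseek-r1-70b", "o3-mini-beta"):
        print(f"{model}: {run_safety_suite(model):.2f}% unsafe responses")
```

In practice, the evaluation step is the hard part: the study performed it semi-automatically rather than with a simple heuristic like the one sketched here.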

Results of the Safety Analysis

The results of the study show clear differences in the safety of the two models. DeepSeek-R1 responded unsafely to 11.98% of the prompts, whereas o3-mini generated unsafe responses to only 1.19% of the inputs. These results suggest that o3-mini offers a significantly higher level of safety than DeepSeek-R1.
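For readers who prefer absolute numbers, the reported rates can be translated into approximate counts over the 1260 test prompts. The rounded counts below are derived here from the percentages and are not figures quoted directly in the study.

```python
# Back-of-the-envelope conversion of the reported unsafe-response rates into
# approximate absolute counts over the 1260 test prompts.
TOTAL_PROMPTS = 1260
unsafe_rates = {"DeepSeek-R1 (70b)": 11.98, "o3-mini (beta)": 1.19}

for model, rate in unsafe_rates.items():
    approx_unsafe = round(TOTAL_PROMPTS * rate / 100)
    print(f"{model}: ~{approx_unsafe} of {TOTAL_PROMPTS} responses judged unsafe ({rate}%)")

# Output:
# DeepSeek-R1 (70b): ~151 of 1260 responses judged unsafe (11.98%)
# o3-mini (beta): ~15 of 1260 responses judged unsafe (1.19%)
```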

Outlook and Further Research

The safety evaluation of AI models is an ongoing process. As the technology advances and new models emerge, testing procedures and safety standards will need to be continuously adapted and improved. Further research is necessary to fully understand the safety properties of LLMs and to advance the development of safe and trustworthy AI systems. Tools like ASTRAL play an important role in evaluating the safety of AI models objectively and systematically.

For companies such as Mindverse that develop customized AI solutions, safety considerations are of central importance. Selecting an appropriate language model and implementing robust safety mechanisms are crucial for the success and acceptance of AI applications.

Bibliography:
- https://forum.effectivealtruism.org/posts/d3iFbMyu5gte8xriz/is-deepseek-r1-already-better-than-o3-when-inference-costs
- https://paperreading.club/page?id=280939
- https://www.reddit.com/r/singularity/comments/1i5vgsp/open_source_o3_will_probably_come_way_sooner_than/
- https://news.ycombinator.com/item?id=42879609
- https://www.datacamp.com/blog/deepseek-r1
- https://www.analyticsvidhya.com/blog/2025/01/openai-o3-vs-competitors-performance-and-applications/
- https://huggingface.co/papers
- https://bgr.com/tech/developers-caught-deepseek-r1-having-an-aha-moment-on-its-own-during-training/
- https://www.quora.com/What-do-you-think-of-DeepSeeks-DeepSeek-V3-one-of-the-most-powerful-open-AI-models-to-date