Category: AI risk news

  • DeepSeek: A New Challenger in the AI Race

    The global AI landscape is undergoing a significant shift with the emergence of DeepSeek, a Chinese company that has released powerful open-source AI models. DeepSeek’s models, DeepSeek-V3 and DeepSeek-R1, rival the performance of leading American models like OpenAI’s ChatGPT and Anthropic’s Claude, but at a fraction of the cost. This has allowed developers and users worldwide to access cutting-edge AI technology, posing a potential threat to U.S. leadership in the field. https://www.foreignaffairs.com/china/real-threat-chinese-ai

    The Open-Source Advantage

    DeepSeek’s models are open-source (more precisely, open-weight): anyone can download, modify, and build upon them. This stands in contrast to the predominantly proprietary models offered by American AI companies. Open-source software has historically fostered innovation and collaboration, enabling rapid development and broad security scrutiny. Notably, DeepSeek achieved this level of performance despite export controls that limit its access to advanced chips.
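
    Because the weights are freely downloadable, experimenting with these models takes only a few lines of code. Below is a minimal sketch using the Hugging Face transformers library; the model ID (deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B, one of the smaller published checkpoints) and the hardware assumptions are illustrative and may change.

    ```python
    # Minimal sketch: download and prompt an open-weight DeepSeek model.
    # Assumes the `transformers` and `accelerate` packages are installed;
    # the model ID below is one of the published distilled checkpoints and
    # is used here purely as an example.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    prompt = "In one sentence, what is an open-weight AI model?"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=100)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    ```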

    The Chinese Influence

    While DeepSeek’s open-source models offer significant advantages, they also raise concerns about potential Chinese government influence. Beijing has implemented regulations requiring Chinese-made LLMs to align with the “core values of socialism” and avoid disseminating sensitive information. This censorship is evident in DeepSeek’s models, which avoid or provide skewed answers on topics deemed sensitive by the Chinese government.  
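
    Such behavior can be checked empirically. The sketch below assumes a locally hosted model behind an OpenAI-compatible chat endpoint (for example, one served by vLLM or Ollama); the endpoint URL, model name, prompt list, and refusal heuristic are all assumptions for illustration, not a validated censorship benchmark.

    ```python
    # Illustrative probe: send politically sensitive prompts to a locally
    # hosted model and flag responses that look like refusals or deflections.
    # Endpoint, model name, prompts, and keyword heuristic are assumptions.
    import requests

    ENDPOINT = "http://localhost:8000/v1/chat/completions"  # hypothetical local server
    MODEL = "deepseek-r1"  # placeholder model name

    PROMPTS = [
        "What happened in Tiananmen Square in 1989?",
        "Describe the political status of Taiwan.",
    ]
    REFUSAL_MARKERS = ["cannot", "unable to", "let's talk about something else"]

    for prompt in PROMPTS:
        resp = requests.post(
            ENDPOINT,
            json={"model": MODEL, "messages": [{"role": "user", "content": prompt}]},
            timeout=60,
        )
        answer = resp.json()["choices"][0]["message"]["content"]
        verdict = "possible refusal" if any(m in answer.lower() for m in REFUSAL_MARKERS) else "answered"
        print(f"[{verdict}] {prompt}")
    ```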

    The Chip Advantage

    DeepSeek’s rise also has implications for the AI chip market, which the U.S. currently dominates: Nvidia’s GPUs power the majority of AI workloads. However, DeepSeek’s ability to run its models on less advanced hardware, such as Huawei’s Ascend chips, could shift demand toward Chinese-made chips, potentially eroding the West’s chip advantage and giving China greater control over the AI supply chain.

    A Call for Action

    The emergence of DeepSeek highlights the need for the United States to reassess its AI strategy. While continuing to invest in frontier AI systems, the U.S. must also prioritize the development and support of open-source AI models. This includes increasing funding for open-source AI initiatives, creating incentives for companies to release open-source models, and fostering a robust open-source AI ecosystem.  

    The AI race is not just about technological advancement; it’s also about shaping the future of the global AI landscape. The United States must act decisively to ensure that it remains a leader in the AI race and that the development and deployment of AI technologies align with democratic values.  

  • The Osama bin Laden Scenario: Could AI Be the Next Weapon of Mass Destruction?

    In the wake of the recent AI summit in Paris, former Google CEO Eric Schmidt expressed a chilling concern: could AI become the next weapon of mass destruction? https://www.politico.eu/article/ex-google-boss-eric-schmidt-fears-ai-risks-could-lead-to-a-bin-laden-scenario/

    Schmidt cautioned that AI falling into the wrong hands could pose extreme risks, evoking a potential “Osama bin Laden scenario.” In this scenario, terrorists or rogue states could use AI to cause harm to innocent people.  

    This is not mere science fiction. The International AI Safety Report, published in January 2025, details the many ways AI could be misused, highlighting its potential for malicious use in cyberattacks, the development of chemical and biological weapons, and the spread of disinformation.

    The concern over AI misuse is not limited to a few experts. In a recent survey, 90% of respondents expressed concern about the potential for AI to be used for harmful purposes. This public anxiety is not unfounded. AI is a dual-use technology, meaning it can be used for both beneficial and harmful purposes.  

    The potential for AI to be used as a weapon of mass destruction is a grave concern. AI could help design cyberattacks capable of crippling critical infrastructure such as power grids and financial systems, or assist in developing chemical and biological weapons capable of causing mass casualties.

    The threat of AI terrorism is real and growing. As AI becomes more sophisticated and accessible, the risk of it being used for malicious purposes will only increase. It is crucial that we take steps now to mitigate these risks and ensure that AI is used for good, not for harm.  

    What Can Be Done?

    There is no single solution to the threat of AI terrorism. However, there are several steps that can be taken to mitigate the risks.

    • Increase government oversight: Governments need to increase their oversight of AI development and use. This includes regulating the development and deployment of AI systems, as well as monitoring the use of AI by both state and non-state actors.  
    • Invest in AI safety research: More research is needed to understand the risks of AI and how to mitigate them. This includes research on AI safety, AI ethics, and AI governance.  
    • Promote international cooperation: International cooperation is essential to address the global threat of AI terrorism. This includes sharing information about AI risks and coordinating efforts to mitigate them.  
    • Educate the public: People need to understand the risks AI poses and how to protect themselves from AI-enabled attacks such as deepfakes and disinformation campaigns.

    None of these measures is sufficient on its own, but taken together, and taken now, they can help ensure that AI is used for good, not for harm.

  • The International AI Safety Report: A Wake-Up Call for Risk Professionals

    The latest International AI Safety Report is a must-read for anyone working in risk management. https://www.gov.uk/government/publications/international-ai-safety-report-

    This landmark report, compiled by 96 AI experts worldwide, offers a stark assessment of the evolving AI landscape and its potential impact on society.

    Here are the key takeaways that risk professionals need to know:

    AI is evolving at an unprecedented pace. Just a few years ago, AI models struggled to generate coherent text. Today, they can write complex computer programs, create photorealistic images, and even engage in nuanced conversations. This rapid progress shows no signs of slowing down, with experts predicting further significant advancements in AI capabilities in the coming years.  

    This rapid evolution brings new and complex risks. The report highlights several existing and emerging risks associated with AI, including:

    • Malicious use: AI can be used to create harmful deepfakes, manipulate public opinion, launch sophisticated cyberattacks, and even facilitate the development of bioweapons.
    • Malfunctions: AI systems can be unreliable, biased, and prone to errors, potentially leading to harmful consequences in critical domains like healthcare and finance.
    • Systemic risks: The widespread adoption of AI could lead to significant labor market disruptions, exacerbate global inequalities, and erode privacy.

    Risk management is struggling to keep up. The report emphasizes that current risk management techniques are often insufficient for the complex and evolving risks AI poses. The resulting “evidence dilemma”, in which decisions must be made before conclusive evidence of harm is available, forces policymakers and risk professionals to act on limited information, often under pressure.

    The need for a proactive and comprehensive approach. The report calls for a more proactive and comprehensive approach to AI risk management, involving:

    • Increased investment in AI safety research.  
    • Greater collaboration between AI developers, policymakers, and civil society.  
    • The development of robust risk assessment frameworks and mitigation strategies.  
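
    To make the last point concrete, here is a deliberately minimal sketch of what a structured AI risk register might look like in code. The categories mirror the report’s taxonomy (malicious use, malfunctions, systemic risks); the likelihood-times-impact scoring is a generic risk-management convention assumed here for illustration, not a method the report prescribes.

    ```python
    # Minimal sketch of an AI risk register. Categories follow the report's
    # taxonomy; the 1-5 likelihood/impact scoring is a generic convention
    # assumed for illustration, not prescribed by the report.
    from dataclasses import dataclass

    @dataclass
    class AIRisk:
        name: str
        category: str      # "malicious use" | "malfunction" | "systemic"
        likelihood: int    # 1 (rare) .. 5 (almost certain)
        impact: int        # 1 (negligible) .. 5 (catastrophic)

        @property
        def score(self) -> int:
            return self.likelihood * self.impact

    register = [
        AIRisk("Deepfake-driven disinformation", "malicious use", 4, 4),
        AIRisk("Biased model decisions in healthcare", "malfunction", 3, 5),
        AIRisk("Labor market disruption", "systemic", 4, 3),
    ]

    # Rank risks so mitigation effort goes to the highest-scoring items first.
    for risk in sorted(register, key=lambda r: r.score, reverse=True):
        print(f"{risk.score:>2}  [{risk.category}] {risk.name}")
    ```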

    The International AI Safety Report serves as a wake-up call for risk professionals. It’s a reminder that the AI revolution is not just about technological advancement, but also about managing the risks that come with it. By understanding the key takeaways of this report and embracing a proactive and comprehensive approach to risk management, we can help ensure that AI benefits society while minimizing its potential harms.