Top of the Day
Does China’s DeepSeek Mean U.S. AI Is Sunk?
(Sam Bresnick – Newsweek/Center for Security and Emerging Technology – 29 January 2025) Last week, the Chinese startup DeepSeek sent shockwaves through the global technology community when it unveiled a powerful new open-source AI system. The model, known as R1, reportedly matched the performance of leading models from U.S. companies like OpenAI, even though it was built at a fraction of the cost. In the days since its release, R1 has shaken global financial markets and left AI experts scrambling to understand how a relatively unknown Chinese startup could achieve such a breakthrough. – https://www.newsweek.com/does-chinas-deepseek-mean-us-ai-sunk-opinion-2022892
Italian regulator asks DeepSeek for information about data collection
(Suzanne Smalley – The Record – 29 January 2025) Italy’s data privacy regulator announced Tuesday that it has asked the Chinese artificial intelligence company DeepSeek to provide information about its data practices. – https://therecord.media/italian-regulator-deepseek-info-collection
DeepSeek’s AI bombshell sends Meta into ‘war room’ mode to fight low-cost threat
(Sujita Sinha – Interesting Engineering – 29 January 2025) Meta is in crisis mode after DeepSeek, a Chinese AI startup, launched a game-changing AI model. Reports indicate that Meta assembled four “war rooms” to investigate how the startup, backed by High-Flyer Capital Management, developed its R1 chatbot. DeepSeek claims that R1 performs on par with models like ChatGPT while operating at a fraction of the cost. This development has put Meta’s AI team on high alert, as it raises concerns about the massive investments American companies are making in AI. Mathew Oldham, Meta’s AI infrastructure director, has reportedly told colleagues that the Chinese startup’s new model could surpass Meta’s next-generation AI, Llama 4, which is set for release in early 2025. The Information reported that two Meta employees confirmed the company’s urgent response to the unexpected competition. – https://interestingengineering.com/culture/meta-war-rooms-analyze-deepseek
Chinese GenAI Startup DeepSeek Sparks Global Privacy Debate
(Kevin Poireault – Infosecurity Magazine – 29 January 2025) The year-old Chinese startup DeepSeek took the world by storm when it launched R1, its new large language model (LLM), but experts are now raising concerns about the risks it poses. DeepSeek’s breakthrough is that this reasoning model, an AI trained with reinforcement learning to perform complex reasoning, was likely developed without access to the latest Nvidia AI chips due to export sanctions. – https://www.infosecurity-magazine.com/news/deepseek-global-privacy-debate/
DeepSeek is a modern Sputnik moment for West
(Justin Bassi, David Wroe – ASPI The Strategist) The release of China’s latest DeepSeek artificial intelligence model is a strategic and geopolitical shock as much as it is a shock to stock markets around the world. This is a field into which US investors have been pumping hundreds of billions of dollars, and which many commentators predicted would be led by Silicon Valley for the foreseeable future. – https://www.aspistrategist.org.au/deepseek-is-a-modern-sputnik-moment-for-west/
What DeepSeek’s breakthrough says (and doesn’t say) about the ‘AI race’ with China
(Kenton Thibaut – Atlantic Council – 28 January 2025) This week, tech and foreign policy spaces are atwitter with the news that a China-based open-source reasoning large language model (LLM), DeepSeek-R1, was found to match the performance of OpenAI’s o1 model across a number of core tasks. It has reportedly done so for a fraction of the cost, and you can access it for free. – https://www.atlanticcouncil.org/blogs/new-atlanticist/what-deepseeks-breakthrough-says-and-doesnt-say-about-the-ai-race-with-china/
Governance and Legislation
The Trouble With AI Safety Treaties
(Keegan McBride, Adam Thierer – Lawfare – 29 January 2025) Rapid advances in the capabilities of artificial intelligence (AI) “foundation models” have brought conversations about AI to the forefront of global discourse. With a few exceptions, innovation in AI has been driven by, or is dependent on, American industry, science, infrastructure, and capital. Palantir’s CEO recently characterized this dominance in remarks at the Reagan National Defense Forum: “America is in the very beginning of a revolution that we own. The AI revolution. We own it. It should basically be called the U.S. AI revolution.” However, many foreign governments—including those not implementing or building AI—are asking for a seat at the table to decide how this emerging powerful technology should be governed. While these conversations are likely to continue, policymakers must remain cognizant of the current advantage that the U.S. maintains in the development of AI. Technological dominance is a key component of America’s national security: Limiting the country’s ability to innovate and lead in AI would have serious implications for the country’s long-term interests. – https://www.lawfaremedia.org/article/the-trouble-with-ai-safety-treaties
Industry groups call on Congress to enact federal data privacy law
(Suzanne Smalley – The Record – 29 January 2025) More than three dozen industry groups are asking Congressional leaders to pass federal data privacy legislation that will override a patchwork of disparate state privacy laws. In a letter sent Tuesday, the groups pushed for a national standard that will be easier for businesses to adhere to. – https://therecord.media/industry-groups-congress-data-privacy
To regulate cyber behaviour, listen to Indo-Pacific voices
(Gatra Priyandita, Louise Marie Hurel – ASPI The Strategist – 29 January 2025) The international community must broaden its understanding of responsible cyber behaviour by incorporating diverse perspectives from the Indo-Pacific, a region critical to the future of global cyber governance. As the mandate of the United Nations Open-Ended Working Group on the security and use of information and communications technologies ends in July 2025, the world must reflect on what it means to be a responsible state actor in cyberspace. Over two decades, the UN has developed a framework of responsible state behaviour in cyberspace, which includes the acceptance that international law applies to state conduct in cyberspace and a commitment to observe a set of norms. – https://www.aspistrategist.org.au/to-regulate-cyber-behaviour-listen-to-indo-pacific-voices/
Trump’s Commerce pick backs light-touch regulation in emerging tech policy
(Alexandra Kelley – NextGov – 29 January 2025) Howard Lutnick, President Donald Trump’s nominee to lead the Department of Commerce, advocated for a competitive posture in U.S. tech policy during his confirmation hearing on Wednesday, particularly to maintain leadership in artificial intelligence innovation. Touching on a range of tech policy issues, including chip manufacturing and intellectual property protection, lawmakers on the Senate Commerce, Science and Transportation Committee gauged how Lutnick would lead economic growth if confirmed, with an overarching thread of prioritizing technological innovation over excessive regulations. – https://www.nextgov.com/emerging-tech/2025/01/trumps-commerce-pick-backs-light-touch-regulation-emerging-tech-policy/402592/?oref=ng-home-top-story
Song-Chun Zhu: The Race to General Purpose Artificial Intelligence is not Merely About Technological Competition; Even More So, it is a Struggle to Control the Narrative
(Center for Security and Emerging Technology – 29 January 2025) Chinese artificial intelligence expert Song-Chun Zhu argued on 11 January 2025 at the 26th Peking University Guanghua New Year Forum that China’s artificial intelligence industry should chart a different course than the current US focus on big data-intensive language and processing models. He argued that China should simultaneously explore multiple paths towards general-purpose artificial intelligence, such as modelling human cognition, algorithm innovation and ‘small data’. – https://cset.georgetown.edu/publication/song-chun-zhu-ai-narrative-control/
Security
South Africa’s government-run weather service knocked offline by cyberattack
(Jonathan Greig – The Record – 29 January 2025) A cyberattack has forced the government-run South African Weather Service (SAWS) offline, limiting access to a critical service used by the country’s airlines, farmers and allies. The website for SAWS has been down since Sunday evening, according to a statement posted to social media. SAWS has had to use Facebook, X and other sites to share daily information on thunderstorms, wildfires and other weather events. – https://therecord.media/south-african-weather-service-cyberattack
Maryland healthcare network forced to shut down IT systems after ransomware attack
(Jonathan Greig – The Record – 29 January 2025) A ransomware attack on a large healthcare network in Maryland has forced officials to shut off IT systems and cancel some appointments. Frederick Health Medical Group warned on Monday that there will be delays in service as it contends with the cyberattack. – https://therecord.media/maryland-healthcare-ransomware-frederick-health
Report: Almost half of state consumer privacy laws fail to protect individuals’ data
(Suzanne Smalley – The Record – 29 January 2025) Nearly half of state consumer privacy laws fail to adequately protect individuals’ data and have made consumer protections weaker than they were before the laws were passed, according to a report released Tuesday. Of 19 states with data privacy laws, eight failed an assessment conducted by two leading advocacy groups, the Electronic Privacy Information Center (EPIC) and U.S. PIRG Education Fund. – https://therecord.media/state-consumer-privacy-laws-failing-to-protect-data
AI Surge Drives Record 1205% Increase in API Vulnerabilities
(Alessandro Mascellino – Infosecurity Magazine – 29 January 2025) AI-driven API vulnerabilities have skyrocketed by 1205% in the past year. The figures come from the 2025 API ThreatStats Report by Wallarm, which highlights how AI has become the biggest driver of API security threats, with nearly 99% of AI-related vulnerabilities tied to API flaws. The study also found that 57% of AI-powered APIs were accessible externally, while 89% lacked secure authentication. Only 11% implemented robust security measures. – https://www.infosecurity-magazine.com/news/ai-surge-record-1205-increase-api/
Nation-State Hackers Abuse Gemini AI Tool
(James Coker – Infosecurity Magazine – 29 January 2025) Nation-state threat actors are frequently abusing Google’s generative AI tool Gemini to support their malicious cyber operations. An analysis by the Google Threat Intelligence Group (GTIG) highlighted that APT groups from Iran, China, Russia and North Korea are using the large language model (LLM) for a wide range of malicious activity. – https://www.infosecurity-magazine.com/news/nation-state-abuse-gemini-ai/
New Hellcat Ransomware Gang Employs Humiliation Tactics
(James Coker – Infosecurity Magazine – 29 January 2025) The recently emerged HellCat ransomware gang is using psychological tactics to court public attention and pressure victims to pay extortion demands. This is according to an analysis of the ransomware-as-a-service (RaaS) group by Cato Networks, published on January 28. – https://www.infosecurity-magazine.com/news/hellcat-ransomware-humiliation/
Threat Actors Exploit Government Websites for Phishing
(Alessandro Mascellino – Infosecurity Magazine – 29 January 2025) Cybercriminals have been increasingly exploiting government website vulnerabilities to conduct phishing campaigns. New research by Cofense Intelligence, analyzing data from November 2022 to November 2024, showed how malicious actors abuse .gov top-level domains (TLDs) across multiple countries. According to the new data, threat actors often leveraged legitimate domains to host credential phishing pages, serve as command-and-control (C2) servers or redirect victims to malicious sites. While .gov domains were abused less frequently than others, they remained a target due to users’ inherent trust in government websites. – https://www.infosecurity-magazine.com/news/threat-actors-exploit-gov-websites/
Breakout Time Accelerates 22% as Cyber-Attacks Speed Up
(Phil Muncaster – Infosecurity Magazine – 29 January 2025) Threat actors exploited new vulnerabilities and moved from initial access to lateral movement much faster in 2024, challenging network defenders to accelerate incident response, according to ReliaQuest. The security operations (SecOps) specialist analyzed customer data and compared its findings with external industry reporting to better understand attack trends over the past year. – https://www.infosecurity-magazine.com/news/breakout-time-accelerates-22/
Scores of Critical UK Government IT Systems Have Major Security Holes
(Phil Muncaster – Infosecurity Magazine – 29 January 2025) The UK government’s spending watchdog has raised grave concerns about the cyber resilience of critical IT systems across departments, highlighting major gaps in system controls and visibility. The warnings came from the National Audit Office (NAO) in its Government cyber resilience report published today. – https://www.infosecurity-magazine.com/news/scores-critical-government-it/
How Organizations Can Mitigate Privacy Risks From GenAI
(Kevin Poireault – Infosecurity Magazine – 29 January 2025) The emergence of powerful language models and their companion chatbot tools like OpenAI’s ChatGPT and Anthropic’s Claude has revolutionized human-computer interaction, but their rapid development has also raised critical concerns about data privacy. As these models become increasingly sophisticated, the potential for misuse of personal information grows exponentially. – https://www.infosecurity-magazine.com/news-features/how-mitigate-privacy-risk-genai/
Defense, Intelligence, and Warfare
Through a Glass, Darkly: Transparency and Military AI Systems
(Branka Marijan – Centre for International Governance Innovation – 29 January 2025) The need for transparency is often emphasized in international governance discussions on military artificial intelligence (AI) systems; however, transparency is a complex and multi-faceted concept, understood in various ways within international debates and literature on the responsible use of AI. It encompasses dimensions such as explainability, interpretability, understandability, predictability and reliability. The degree to which these aspects are reflected in state approaches to ensuring transparent and accountable systems remains unclear and requires further investigation. The paper examines the feasibility of achieving transparency in military AI systems, considers the associated challenges and proposes pathways to develop effective transparency mechanisms. Transparency efforts are one critical part of the broader governance and regulatory framework that needs to be developed for military applications of AI and autonomy. – https://www.cigionline.org/publications/through-a-glass-darkly-transparency-and-military-ai-systems/
CCA Signals a New Era in AI-Driven Air Combat
(Gregory C. Allen, Isaac Goldston – Center for Strategic & International Studies – 28 January 2025) The Collaborative Combat Aircraft (CCA) program represents an enormous increase in DOD investment in AI-enabled and autonomous fighter aircraft and demonstrates the key role AI could play in the future of U.S. airpower. One of the biggest decisions the new DOD leadership will need to make is whether to go ahead with a manned Next Generation Air Dominance (NGAD) fighter. – https://www.csis.org/analysis/cca-signals-new-era-ai-driven-air-combat