Weekly Digest on AI and Emerging Technologies (3 February 2025)

HIGHLIGHTS FROM DAILY DIGEST – WEEK 27 TO 31 JANUARY 2025

Terrorism and Counter Terrorism

Non-binding guiding principles on preventing, detecting and disrupting the use of new and emerging financial technologies for terrorist purposes released in official UN languages

(UN CTED – 29 January 2025) On 6 January 2025, the United Nations Security Council Counter-Terrorism Committee adopted the “Non-binding guiding principles on preventing, detecting and disrupting the use of new and emerging financial technologies for terrorist purposes,” to be known and referred to as the “Algeria Guiding Principles.” The “Algeria Guiding Principles” were prepared with the support of the Counter-Terrorism Committee Executive Directorate (CTED) in accordance with the “Delhi Declaration” on countering the use of new and emerging technologies for terrorist purposes in a manner consistent with international law. As recognized by the Security Council in its resolution 2462 (2019), innovations in financial technologies may offer significant economic opportunities, but may also present a risk of being misused, including for terrorist purposes. The growing scale of such misuse has been highlighted in several reports of the United Nations, the Financial Action Task Force (FATF) and FATF-style regional bodies, as well as by members of CTED’s Global Research Network and private sector partners. The scale and types of abuses vary considerably depending on regional and economic context, available means, and the targets set by terrorists in terms of their financing sources and methods. – https://www.un.org/securitycouncil/ctc/news/non-binding-guiding-principles-preventing-detecting-and-disrupting-use-new-and-emerging

Governance and Legislation

Throwing Caution to the Wind: Unpacking the U.K. AI Opportunities Action Plan

(Elke Schwarz – Just Security – 30 January 2025) On Jan. 13, the U.K. government announced its long-awaited “AI Opportunities Action Plan,” which promises to ramp up “AI adoption across the U.K. to boost economic growth, provide jobs for the future and improve people’s everyday lives.” The plan, drafted by venture capital entrepreneur Matt Clifford, features 50 recommendations outlining what might be required for the United Kingdom to supercharge its AI capabilities. This includes building out an expanded data and computing infrastructure, capturing and nurturing AI talent, “unlocking data assets” for both the private and public sectors, and removing barriers to AI adoption. It also includes the recommendation to create a “U.K. Sovereign AI” unit to serve as a lynchpin for public-private partnerships and as an instrument to support new and existing private sector frontier AI companies with direct investments and other measures that would help such startups and their CEOs thrive in the U.K. The plan recommends that this new government unit “packag[e] and provid[e] responsible access to the most valuable UK owned data sets and relevant research” and “facilitat[e] deep collaborations with the national security community.” The unit must be able to “remove barriers and make deals” and receive sufficient funding to “act quickly and decisively” in a fast-moving environment. – https://www.justsecurity.org/106837/uk-ai-opportunities-action-plan-little-room-responsible-governance/

The Trouble With AI Safety Treaties

(Keegan McBride, Adam Thierer – Lawfare – 29 January 2025) Rapid advances in the capabilities of artificial intelligence (AI) “foundation models” have brought conversations about AI to the forefront of global discourse. With a few exceptions, innovation in AI has been driven by, or is dependent on, American industry, science, infrastructure, and capital. Palantir’s CEO recently characterized this dominance in remarks at the Reagan National Defense Forum: “America is in the very beginning of a revolution that we own. The AI revolution. We own it. It should basically be called the U.S. AI revolution.” However, many foreign governments—including those not implementing or building AI—are asking for a seat at the table to decide how this powerful emerging technology should be governed. While these conversations are likely to continue, policymakers must remain cognizant of the current advantage that the U.S. maintains in the development of AI. Technological dominance is a key component of America’s national security: limiting the country’s ability to innovate and lead in AI would have serious implications for its long-term interests. – https://www.lawfaremedia.org/article/the-trouble-with-ai-safety-treaties

Song-Chun Zhu: The Race to General Purpose Artificial Intelligence is not Merely About Technological Competition; Even More So, it is a Struggle to Control the Narrative

(Center for Security and Emerging Technology – 29 January 2025) China’s artificial intelligence expert Song-Chun Zhu argued on 11 January 2025 at the 26th Peking University Guanghua New Year Forum that China’s artificial intelligence industry should chart a different course from the current US focus on big data-intensive language and processing models. He argued that China should simultaneously explore multiple paths towards general-purpose artificial intelligence, such as modelling human cognition, algorithm innovation and ‘small data’. – https://cset.georgetown.edu/publication/song-chun-zhu-ai-narrative-control/

Chinese GenAI Startup DeepSeek Sparks Global Privacy Debate

(Kevin Poireault – Infosecurity Magazine – 29 January 2025) The year-old Chinese startup DeepSeek took the world by storm when it launched R1, its new large language model (LLM), but experts are now raising concerns about the risks it poses. DeepSeek’s breakthrough is that this reasoning model, an AI trained with reinforcement learning to perform complex reasoning, was likely developed without access to the latest Nvidia AI chips due to export sanctions. – https://www.infosecurity-magazine.com/news/deepseek-global-privacy-debate/

Why nobody can see inside AI’s black box

(Abi Olvera – Bulletin of the Atomic Scientists – 27 January 2025) When you click a button in Microsoft Word, you likely know the exact outcome. That’s because each user action leads to a predetermined result through a path that developers carefully mapped out, line by line, in the program’s source code. The same goes for most of the widely used computing applications available until recently. But artificial intelligence systems, particularly the large language models that power the likes of ChatGPT and Claude, were built and thus operate in a fundamentally different way. Developers didn’t meticulously program these new systems in a step-by-step fashion. The models shaped themselves through complex learning processes, training on vast amounts of data to recognize patterns and generate responses. – https://thebulletin.org/2025/01/why-nobody-can-see-inside-ais-black-box/#post-heading

How might standard contract terms help unlock responsible AI data sharing?

(Lee Tiedrich, Elena Simperl, Gefion Thuermer, Thomas Carey-Wilson – OECD.AI – 27 January 2025) As artificial intelligence (AI) technology advances, reshaping industries and society, the urgency to develop robust frameworks for responsibly sharing data for AI use cases becomes increasingly clear. High-quality datasets fuel the AI systems that have made breakthroughs possible in healthcare, environmental protection, social welfare, and in other realms over the past years. Despite these potential benefits, significant barriers remain. For instance, data must be made available and shared ethically while navigating a complex array of legal requirements. Additional challenges—such as the infrastructure needed to store and process vast amounts of data and the substantial energy demands associated with it—lie beyond the scope of this discussion. Nonetheless, a confluence of ethical considerations and legal obligations can frequently compound with these other issues to slow progress. – https://oecd.ai/en/wonk/standard-contract-terms-responsible-ai-data-sharing

Report: agencies’ adoption of GenAI depends on safe and ethical principles

(Edward Graham – NextGov – 27 January 2025) Government agencies need to prioritize the responsible adoption of emerging capabilities like generative artificial intelligence as they pursue their technology modernization efforts, according to a report released last month by the IBM Center for The Business of Government. The analysis — which included interviews with officials in the U.S., Canada and Australia — outlined a framework for how government leaders can implement new capabilities while working to overcome challenges with harmonizing and replacing legacy systems. – https://www.nextgov.com/artificial-intelligence/2025/01/report-agencies-adoption-genai-depends-safety-standard-compliance/402526/?oref=ng-homepage-river

The Need for Tech Regulation Beyond U.S.-China Rivalry

(Kenton Thibaut – Lawfare – 24 January 2025) U.S. policymaking circles increasingly frame the topic of technology regulation in terms of a “race” with China for global supremacy, with critical national security implications. Tech executives have warned that regulatory and antitrust measures targeting Big Tech would impede artificial intelligence (AI) companies from out-innovating China, thus undermining U.S. national security; the Trump administration just scrapped the Biden administration’s 2023 AI executive order in large part due to this same argument. And officials in both the Trump and Biden administrations and Congress have drafted and passed measures to ensure that U.S. technology is developed and governed with national security objectives front and center. These objectives have been served in the form of export controls, investment restrictions, and efforts to ban Chinese technologies from the U.S. market. – https://www.lawfaremedia.org/article/the-need-for-tech-regulation-beyond-u.s.-china-rivalry

Fighting deepfakes: what’s next after legislation?

(Fitriani – ASPI The Strategist – 24 January 2025) Deepfake technology is weaponising artificial intelligence in a way that disproportionately targets women, especially those working in public roles, compromising their dignity, safety, and ability to participate in public life. This digital abuse requires urgent global action, as it not only infringes on human rights but also undermines women’s democratic participation. – https://www.aspistrategist.org.au/fighting-deepfakes-whats-next-after-legislation/

Global education must integrate AI, centred on humanity

(UN News – 24 January 2025) Marking the International Day of Education, UN Secretary-General António Guterres has emphasized learning as a basic human right and foundation for individual and societal growth. His message highlighted the dual nature of technological advances such as Artificial Intelligence, which offer immense potential – but also pose considerable risks. “Education is an essential building block for every person to reach their full potential, and for societies and economies to grow and flourish”, Mr. Guterres said. – https://news.un.org/en/story/2025/01/1159381

Politicization of intel oversight board could threaten key US-EU data transfer agreement

(Suzanne Smalley – The Record – 24 January 2025) The Trump administration’s decision to order the resignations of all Democratic members of the Privacy and Civil Liberties Oversight Board (PCLOB) could jeopardize a transatlantic data privacy agreement designed to protect the flow of commercial data between Europe and the U.S., potentially complicating the way American companies do business in Europe. PCLOB plays a central role in a data agreement struck between the U.S. and the European Union in 2023 that allows data to flow freely between the two, despite differences in policy approaches. The EU has relied in large part on the PCLOB to ensure that the U.S. adequately protects personal data, and to address complaints from Europeans about any misuse of their data. – https://therecord.media/politicization-of-pclob-could-threaten-key-eu-us-data-transfer-agreement

If reliable AI content detection tools become available, should we use them? If so, how?

(Alistair Knott, Dino Pedreschi, Susan Leavy – OECD.AI – 24 January 2025) The world is being inundated with AI-generated content. A recent investigation by Wired magazine found 7% of global news stories were AI-generated, rising to 47% for Medium posts, and even 78% for specific topics. This content is generated through interactions with AI systems such as chatbots and image generators. Once produced and published, this content can take on a life of its own. It can be posted on discussion boards and social media platforms, added to websites such as Wikipedia and Stack Exchange, disseminated in newspapers and academic journals, and aired on TV or radio. From there, it can be shared and reshared without referencing its AI origins. – https://oecd.ai/en/wonk/ai-content-detection-tools

Next-Gen Industrial Infrastructure

(Amani Abou-Zeid, Christophe De Vusser, Bandar Alkhorayef, Niclas Mårtensson, Huang Shan – WEF – 23 January 2025) The intelligent infrastructure market is expected to grow to $2 trillion over the next 10 years, bolstered by increasing private sector investment. Technologies such as AI, quantum and hyperconnectivity are blending the physical and the digital in an integrated approach to infrastructure, promising to transform industrial operations. How are smart infrastructure investments both meeting the increasing demand for infrastructure in industry and contributing to responsible and sustainable growth? – https://www.weforum.org/meetings/world-economic-forum-annual-meeting-2025/sessions/next-gen-industrial-infrastructure/

Frontiers

DeepSeek challenges the supremacy of US tech companies in AI

(Jenny Wong-Leung, Stephan Robin – ASPI The Strategist – 30 January 2025) It shouldn’t have come as a complete shock. US tech stocks, especially chipmaker Nvidia, plunged on Monday after news that the small China-based company DeepSeek had achieved a dramatic and reportedly inexpensive advance in artificial intelligence. But the step forward for China’s AI industry was in fact foreseeable. It was foreseeable from ASPI’s Critical Technology Tracker, which was launched in early 2023 and which in its latest update monitors high-impact research (measured as the 10 percent most highly cited publications) over two decades across 64 technologies, including machine learning and natural language processing (NLP). – https://www.aspistrategist.org.au/deepseek-challenges-the-supremacy-of-us-tech-companies-in-ai/

Does China’s DeepSeek Mean U.S. AI Is Sunk?

(Sam Bresnick – Newsweek/Center for Security and Emerging Technology – 29 January 2025) Last week, the Chinese startup DeepSeek sent shockwaves through the global technology community when it unveiled a powerful new open-source AI system. The model, known as R1, reportedly matched the performance of leading models from U.S. companies like OpenAI, even though it was built at a fraction of the cost. In the days since its release, R1 has shaken global financial markets and left AI experts scrambling to understand how a relatively unknown Chinese startup could achieve such a breakthrough. – https://www.newsweek.com/does-chinas-deepseek-mean-us-ai-sunk-opinion-2022892

DeepSeek is a modern Sputnik moment for West

(Justin Bassi, David Wroe – ASPI The Strategist) The release of China’s latest DeepSeek artificial intelligence model is a strategic and geopolitical shock as much as it is a shock to stockmarkets around the world. This is a field into which US investors have been pumping hundreds of billions of dollars, and which many commentators predicted would be led by Silicon Valley for the foreseeable future. – https://www.aspistrategist.org.au/deepseek-is-a-modern-sputnik-moment-for-west/

DeepSeek’s AI bombshell sends Meta into ‘war room’ mode to fight low-cost threat

(Sujita Sinha – Interesting Engineering – 29 January 2025) Meta is in crisis mode after DeepSeek, a Chinese AI startup, launched a game-changing AI model. Reports indicate that Meta assembled four “war rooms” to investigate how the new model, backed by High-Flyer Capital Management, developed its R1 chatbot. DeepSeek claims that R1 performs on par with models like ChatGPT while operating at a fraction of the cost. This development has put Meta’s AI team on high alert, as it raises concerns about the massive investments American companies are making in AI. Mathew Oldham, Meta’s AI infrastructure director, has reportedly told colleagues that the Chinese startup’s new model could surpass Meta’s next-generation AI, Llama 4, which is set for release in early 2025. The Information reported that two Meta employees confirmed the company’s urgent response to the unexpected competition. – https://interestingengineering.com/culture/meta-war-rooms-analyze-deepseek

What DeepSeek r1 Means—and What It Doesn’t

(Dean W. Ball – Lawfare – 28 January 2025) On Jan. 20, the Chinese AI company DeepSeek released a language model called r1, and the AI community (as measured by X, at least) has talked about little else since. The model is the first to publicly match the performance of OpenAI’s frontier “reasoning” model, o1—beating frontier labs Anthropic, Google’s DeepMind, and Meta to the punch. The model matches, or comes close to matching, o1 on benchmarks like GPQA (graduate-level science and math questions), AIME (an advanced math competition), and Codeforces (a coding competition). – https://www.lawfaremedia.org/article/what-deepseek-r1-means-and-what-it-doesn-t

What Is Sci-Fi, What Is High-Tech?

(Raquel Urtasun, Tom Oxley, Nita Farahany, Anthony Jules, Yossi Vardi – WEF – 24 January 2025) Neurotechnology extends the possibilities of our brains, autonomous systems take us where we need to go and robots are becoming a part of our daily life. These technologies are not just the backdrop of futuristic novels, they are creating a world previously confined to the imaginations of science-fiction writers. What are the key future technologies that once seemed unbelievable and how are they poised to reshape everyday life in 2035? – https://www.weforum.org/meetings/world-economic-forum-annual-meeting-2025/sessions/what-is-sci-fi-what-is-high-tech/

Why we will be seeing a radical reinvention of supply chains

(Karmesh Vaswani – WEF – 24 January 2025) Artificial intelligence and generative AI can transform logistics by optimizing supply chains with real-time pricing, predictive planning and enhanced safety while boosting sustainability. Quantum computing will accelerate innovation using advanced algorithms and faster computation to revolutionize logistics efficiency, traceability and crisis management. Robotics and humanoids will take over repetitive tasks, reduce costs and drive safety, requiring new social contracts and reshaping the workforce. – https://www.weforum.org/stories/2025/01/why-we-will-be-seeing-a-radical-reinvention-of-supply-chains/

Security

DeepSeek: China’s cheap ChatGPT rival AI hit by cyberattack amid meteoric rise

(Sujita Sinha – Interesting Engineering – 28 January 2025) On Monday, Chinese tech startup DeepSeek revealed that its platform was targeted by a massive cyberattack, disrupting user registrations during a crucial moment in the company’s rise. In an official statement, DeepSeek described the incident as “large-scale malicious attacks” on its services. While existing users faced no issues logging in, new users were unable to register. The timing of the attack has raised questions about potential motivations, as the company continues to make waves in the competitive world of artificial intelligence (AI). – https://interestingengineering.com/culture/deepseek-suffers-malicious-cyberattack

Quantum Computers Are Coming for Your Crypto Keys, But Not Yet

(Alex Haynes – Infosecurity Magazine – 24 January 2025) Quantum computing isn’t new, yet there is a fear that the computing power it can offer at a commercial level could be used by threat actors to break the private keys that many digital interactions rely on. This includes breaking the private keys used to protect the wallets of many cryptocurrencies. While this is a legitimate risk and threat, it won’t happen overnight. However, it’s important to analyze where quantum computing stands with regard to its commercial offerings and whether it can really pose a threat to cryptocurrencies. – https://www.infosecurity-magazine.com/opinions/quantum-computers-crypto-keys/

Defending the Cyber Frontlines

(Samir Saran, Matthew Prince, Ravi Agrawal, Mirjana Spoljaric Egger, Andrius Kubilius, Joe Kaeser – WEF – 23 January 2025) From disruptions to critical infrastructure to cyber biothreats, geopolitical crises are extending into uncharted territories. How can the international community move towards a detente in cyberspace? – https://www.weforum.org/meetings/world-economic-forum-annual-meeting-2025/sessions/defending-the-cyber-frontlines/

Can National Security Keep Up with AI?

(Xue Lan, Nick Clegg, Katie Drummond, Ian Bremmer, Henna Virkkunen, Sir Jeremy Fleming – WEF – 23 January 2025) New technologies are reshaping the global security landscape, with countries and companies racing ahead with AI innovations alongside questions over dual-use risks, system misuse, or military applications. How can leaders from the private and public sectors work together to put in place safeguards for rapidly advancing technologies? – https://www.weforum.org/meetings/world-economic-forum-annual-meeting-2025/sessions/where-does-civilian-ai-end-and-military-ai-begin/

Defense, Intelligence, and War

Through a Glass, Darkly: Transparency and Military AI Systems

(Branka Marijan – Centre for International Governance Innovation – 29 January 2025) The need for transparency is often emphasized in international governance discussions on military artificial intelligence (AI) systems; however, transparency is a complex and multi-faceted concept, understood in various ways within international debates and literature on the responsible use of AI. It encompasses dimensions such as explainability, interpretability, understandability, predictability and reliability. The degree to which these aspects are reflected in state approaches to ensuring transparent and accountable systems remains unclear and requires further investigation. The paper examines the feasibility of achieving transparency in military AI systems, considers the associated challenges and proposes pathways to develop effective transparency mechanisms. Transparency efforts are one critical part of the broader governance and regulatory framework that needs to be developed for military applications of AI and autonomy. – https://www.cigionline.org/publications/through-a-glass-darkly-transparency-and-military-ai-systems/

CCA Signals a New Era in AI-Driven Air Combat 

(Gregory C. Allen, Isaac Goldston – Center for Strategic & International Studies – 28 January 2025) The Collaborative Combat Aircraft (CCA) program represents an enormous increase in DOD investment in AI-enabled and autonomous fighter aircraft and demonstrates the key role AI could play in the future of U.S. airpower. One of the biggest decisions the new DOD leadership will need to make is whether to go ahead with a manned Next Generation Air Dominance (NGAD) fighter. – https://www.csis.org/analysis/cca-signals-new-era-ai-driven-air-combat

Protecting European AI-Related Innovations: Preventing Their Use in China’s Military Advancements

(The Hague Centre for Strategic Studies – 20 January 2025) Over the past years, China’s rapid military modernisation has caused alarm in the United States, Asia and Europe. China’s self-stated goals express its intent to leap to a position of leadership and self-sufficiency in artificial intelligence (AI)-based and enabled technologies, with major implications for the military domain. If the US-China military balance of power in the Indo-Pacific definitively tips in China’s favour, this could have far-reaching consequences for security in East Asia, as well as globally. After all, both East Asian democracies and Europe rely on US military power for their protection. In turn, Europe depends on East Asia as the world’s manufacturing hub. In this context, the US and its allies, partly because of US pressure, have resorted to unilateral and plurilateral controls on exports, foreign investment screening and more restrictive knowledge security policies. – https://hcss.nl/report/protecting-european-ai-related-innovations-preventing-their-use-in-chinas-military-advancements/