Weekly Digest on AI and Emerging Technologies (20 January 2025)

HIGHLIGHTS FROM DAILY DIGEST – WEEK 13 TO 17 JANUARY 2025


Governance and Legislation

A Big Tech race to the bottom is bad news for everyone

(David Wells – Lowy The Interpreter – 16 January 2025) There are multiple explanations for last week’s changes in how Meta will moderate content across its platforms, including financial ones, the evolving views of CEO Mark Zuckerberg, and Meta’s policies simply swaying in line with the political pendulum. I would argue that his announcement has also been driven by lessons learned from Elon Musk’s takeover of Twitter more than two years ago. Back in late 2022, I wrote for The Interpreter about how Musk had exposed the fragility of collaborative approaches to countering terrorism online, warning that Twitter/X would provide Big Tech with a high-profile, lowest common denominator that would drag down content moderation standards. – https://www.lowyinstitute.org/the-interpreter/big-tech-race-bottom-bad-news-everyone

AI Will Write Complex Laws

(Nathan Sanders, Bruce Schneier – Lawfare – 16 January 2025) Artificial intelligence (AI) is writing law today. This has required no changes in legislative procedure or the rules of legislative bodies—all it takes is one legislator, or legislative assistant, to use generative AI in the process of drafting a bill. In fact, the use of AI by legislators is only likely to become more prevalent. There are currently projects in the U.S. House, U.S. Senate, and legislatures around the world to trial the use of AI in various ways: searching databases, drafting text, summarizing meetings, performing policy research and analysis, and more. A Brazilian municipality passed the first known AI-written law in 2023. That’s not surprising; AI is being used more everywhere. What is coming into focus is how policymakers will use AI and, critically, how this use will change the balance of power between the legislative and executive branches of government. Soon, U.S. legislators may turn to AI to help them keep pace with the increasing complexity of their lawmaking—and this will suppress the power and discretion of the executive branch to make policy. – https://www.lawfaremedia.org/article/ai-will-write-complex-laws

Unpacking the Biden Administration’s Executive Order on AI Infrastructure

(Clara Apt, Brianna Rosen – Just Security – 16 January 2025) The final days of the Biden administration have witnessed a flurry of activity on AI policy. On Jan. 14, President Biden issued a new executive order (EO) aimed at advancing U.S. leadership in AI by building the next generation of AI infrastructure “in a way that enhances economic competitiveness, national security, AI safety, and clean energy.” The EO followed the Jan. 13 announcement of the “Framework for Artificial Intelligence Diffusion” designed to foster collaboration with U.S. allies while restricting the export of advanced AI chips. Together, the EO and diffusion framework represent the administration’s final push to position the United States at the forefront of the global AI race. While ambitious, the EO faces significant challenges in reconciling environmental goals with economic priorities, underscoring the complexities of fostering sustainable AI innovation. As President-elect Donald Trump prepares to take office next week, the ultimate impact of the EO, which is not codified in law, is highly uncertain. Nevertheless, certain elements of the EO are likely to persist, particularly its emphasis on geothermal and nuclear power as key energy sources for expanding AI infrastructure in the United States. – https://www.justsecurity.org/106427/unpacking-the-biden-administrations-executive-order-on-ai-infrastructure/

FTC updates closely watched children’s online privacy rule

(Suzanne Smalley – The Record – 16 January 2025) The Federal Trade Commission (FTC) on Thursday announced updated online privacy protections for children that require opt-in consent from parents, who will have to explicitly authorize targeted advertising to their children.  The new rule, which will take effect 60 days after it is posted in the Federal Register, also sets strict parameters minimizing how long companies can hold on to children’s data. – https://therecord.media/ftc-coppa-childrens-data-privacy-updated-regulation

Guiding Opinions on Promoting the Development of New-Style Research and Development Institutions

(Center for Security and Emerging Technology – 14 January 2025) The following Chinese government policy document encourages the creation of “new-style R&D institutions,” which differ from traditional Chinese laboratories and research institutes in that they are not state-run and have additional sources of income besides government funding. Historically, Chinese state-run R&D labs have had difficulty converting research breakthroughs into commercially viable applications. Researchers at these new R&D institutes are allowed to profit from their inventions, giving them a stronger incentive to market their innovations. – https://cset.georgetown.edu/publication/china-new-research-institution-opinions/

USPTO debuts 2025 AI strategy

(Alexandra Kelley – NextGov – 14 January 2025) The U.S. Patent and Trademark Office’s new artificial intelligence strategy looks to bring AI technology into the operations that support and expedite patent and trademark applications. Unveiled on Tuesday, the strategy unites five focus areas for the agency: advancing the development of intellectual property policies and promoting AI innovation and inclusion; building best-in-class AI capabilities through product and computational development; promoting the responsible use of AI internally; developing AI expertise within the agency workforce; and engaging in interagency collaboration. – https://www.nextgov.com/artificial-intelligence/2025/01/uspto-debuts-2025-ai-strategy/402166/?oref=ng-homepage-river

AI tools can help reduce climate risks, State strategy says

(Edward Graham – NextGov – 13 January 2025) The U.S. can leverage artificial intelligence capabilities to help prepare for the long-term effects of climate change, the State Department said in an 898-page blueprint released on Friday. The department’s National Adaptation and Resilience Planning Strategy outlined the potential impacts of a warming planet on various industries and communities and identified steps that can be taken to mitigate these risks. – https://www.nextgov.com/artificial-intelligence/2025/01/ai-tools-can-help-reduce-climate-risks-state-strategy-says/402134/?oref=ng-homepage-river

Meta’s decision to ditch fact-checking gives state-sponsored influence operations more chance

(Meg Tapia – Lowy The Interpreter – 13 January 2025) Meta’s recent decision to dismantle its professional fact-checking program marks a significant shift in the company’s approach to moderating content across its platforms – including Facebook, Instagram, and Threads. The world’s most used social network company argues the move is a return to its “roots” of prioritising free expression. However, Meta’s mirroring of the X-created “community notes” model could have far-reaching consequences for national and regional security. – https://www.lowyinstitute.org/the-interpreter/meta-s-decision-ditch-fact-checking-gives-state-sponsored-influence-operations-more

New AI-export rule aims to ease sales to allies, limit leaks to others

(Patrick Tucker – Defense One – 13 January 2025) A groundbreaking new export regulation aims to keep some AI models and related chips out of adversaries’ hands while easing their sale to friendly nations, Biden officials said Sunday, a day before its official release. But America’s top chipmaker castigated the new rule as “sweeping overreach” that would stifle innovation. The so-called Interim Final Rule on Artificial Intelligence Diffusion seeks to streamline licensing hurdles for chip orders, bolster U.S. AI leadership, and help allied and partner nations understand how they can benefit from AI, officials said. – https://www.defenseone.com/policy/2025/01/new-ai-export-rule-aims-ease-sales-allies-limit-leaks-others/402125/?oref=d1-homepage-top-story

What to Know About the New U.S. AI Diffusion Policy and Export Controls

(Michael C. Horowitz – Council on Foreign Relations – 13 January 2025) In its waning days, the Biden administration, through the Commerce Department’s Bureau of Industry and Security (BIS), released an eagerly anticipated Regulatory Framework for the Responsible Diffusion of Advanced Artificial Intelligence Technology. The policy lays out a global framework to govern the export of frontier artificial intelligence (AI) technologies from chips to AI model weights from the United States to the world. The policy builds on previous policy releases focused on limiting exports of AI technology to the People’s Republic of China (PRC) and other countries of concern like Russia. The policy is designed to achieve two goals. First, it attempts to enable U.S. companies to export and lead in key global AI markets by reducing and streamlining current bureaucratic barriers to exports. Second, the policy further controls PRC access to the most advanced U.S.-based AI technologies through regulatory changes. – https://www.cfr.org/blog/what-know-about-new-us-ai-diffusion-policy-and-export-controls

Sovereign data strategies: Boosting or hindering AI development in India?

(Jyoti Panday – Observer Research Foundation – 10 January 2025) In an interconnected world, digital infrastructure, platforms, and services play a pivotal role in the functioning of everything, from communication to commerce. Given the importance of digital technologies, the pursuit of technological sovereignty has become a strategic imperative for some countries. For advocates of this approach, the state’s ability to control and govern digital assets, systems, and data is crucial to achieving the country’s economic, developmental, and security goals, while also strengthening its geo-political influence. The focus is on establishing or advancing domestic capabilities and attaining self-sufficiency in essential technological sectors by either limiting reliance on foreign entities or fostering “national champions”. – https://www.orfonline.org/expert-speak/sovereign-data-strategies-boosting-or-hindering-ai-development-in-india

Generative AI in Government: What to Expect in 2025

(Elizabeth Moon – NextGov – 10 January 2025) The year 2024 saw the public sector cautiously dipping its toes into the generative AI (gen AI) waters with pilot programs and experiments. Driven by the need to streamline operations and meet rising constituent expectations, these early initiatives demonstrated the potential of gen AI to deliver tangible value and ROI. Now, as we enter 2025, expect to see a significant shift from experimentation to widespread adoption. Gen AI is poised to fundamentally transform how government agencies operate, enabling new levels of efficiency and constituent-centric service delivery. – https://www.nextgov.com/ideas/2025/01/generative-ai-government-what-expect-2025/402071/?oref=ng-homepage-river

Chinese AI startups make gains in challenge to US-based OpenAI

(ThinkChina – 10 January 2025) For many of China’s emerging AI startups, the chief challenge is surviving the competition with established domestic giants that dominate market share and user access. – https://www.thinkchina.sg/technology/chinese-ai-startups-make-gains-challenge-us-based-openai?ref=top-hero

Measuring Changes Caused by Generative Artificial Intelligence: Setting the Foundations

(Samantha Lai, Ben Nimmo, Derek Ruths, and Alicia Wanless – Carnegie Endowment for International Peace – 9 January 2025) In 2024’s so-called year of elections, fears abounded over how generative artificial intelligence (GenAI) would impact voting around the world. However, as with other game-changing technologies throughout history, the sociopolitical risks of GenAI extend far beyond direct threats to democracy. As GenAI is leveraged to power “intelligent” products, made available for public use, adopted into routine business and personal activities, and used to refactor whole government and industry workflows, there are major opportunities for these disruptions to have negative consequences as well as positive ones. These consequences will be hard to identify for two reasons. First, GenAI is being integrated into already complex processes. When the outputs of such processes change, it can be hard to trace changes back to their root causes. Second, most processes—whether in industry, government, or our personal lives—are not sufficiently well understood to allow detection of changes, especially those that are just emerging. – https://carnegieendowment.org/research/2025/01/measuring-changes-caused-by-generative-artificial-intelligence-setting-the-foundations?lang=en&mkt_tok=ODEzLVhZVS00MjIAAAGX_ycrhcKE62XFH7NdnKIZP7-WsyyYKMUpMIQt78bTr7F7_yUfzK6FCALVl7d4vGlZj3eyrSHKiVRaPVGCDgwQu_PEVCTeUVcIo1e7hX_zZde-

AI Has Been Surprising for Years

(Holden Karnofsky – Carnegie Endowment for International Peace – 6 January 2025) AI presents a challenge for policymakers: a large number of potential risks have not emerged yet, but could emerge quickly. A first step toward navigating this challenge is recognizing that artificial intelligence doesn’t have the sort of stable, well-understood limitations it used to. – https://carnegieendowment.org/research/2025/01/ai-has-been-surprising-for-years?lang=en

Speaking in Code: Contextualizing Large Language Models in Southeast Asia

(Elina Noor, Binya Kanitroj – Carnegie Endowment for International Peace – 6 January 2025) Southeast Asia’s developers have sought to democratize AI by building language models that better represent the region’s languages, worldviews, and values. Yet, language is deeply political in a region as multiculturally diverse and complex as Southeast Asia. Can localized large language models truly preserve and project the region’s nuances? – https://carnegieendowment.org/research/2025/01/speaking-in-code-contextualizing-large-language-models-in-southeast-asia?lang=en

How Artificial Intelligence Will Affect Asia’s Economies

(Tristan Hennig, Shujaat Khan – IMF blog – 5 January 2025) Asia-Pacific’s economies are likely to experience labor market shifts because of artificial intelligence, with advanced economies being affected more. About half of all jobs in the region’s advanced economies are exposed to AI, compared to only about a quarter in emerging market and developing economies. – https://www.imf.org/en/Blogs/Articles/2025/01/05/how-artificial-intelligence-will-affect-asias-economies

Security

Browser-Based Cyber-Threats Surge as Email Malware Declines

(Alessandro Mascellino – Infosecurity Magazine – 15 January 2025) Browser-based cyber-threats have surged throughout 2024, marking a significant shift in the tactics employed by malicious actors. According to new findings from the 2024 Threat Data Trends report by the eSentire Threat Response Unit (TRU), while malware delivered via email declined last year, browser-sourced threats, including drive-by downloads and malicious advertisements, rose sharply. – https://www.infosecurity-magazine.com/news/browser-cyberthreats-surge-email/

Critical Infrastructure Urged to Scrutinize Product Security During Procurement

(James Coker – Infosecurity Magazine – 14 January 2025) Critical infrastructure organizations have been urged to take action to ensure their operational technology (OT) products are secure by design. Government agencies from the Five Eyes intelligence and security alliance, alongside European partners, issued a joint advisory on January 13 to critical infrastructure firms setting out the key security considerations when purchasing OT products. The guidance is designed to ensure OT owners and operators choose products and manufacturers that follow secure-by-design principles which reduce the likelihood of damaging attacks occurring. – https://www.infosecurity-magazine.com/news/critical-infrastructure-product/

Managing the Security Risks of Geoengineering


(Erin Sikorsky, Tom Ellison – Lawfare – 14 January 2025) As each year is hotter than the last and climate disasters pile up around the world, interest grows in geoengineering, particularly solar radiation modification (SRM)—techniques designed to cool the planet artificially. In September 2024, the British Advanced Research and Invention Agency announced it would fund new geoengineering research, including outdoor experiments, to the tune of 57 million pounds (approximately $75 million). This effort adds to the growing pot of money from philanthropists and tech entrepreneurs directed toward such research in recent years, including a new program the Environmental Defense Fund began in June 2024. – https://www.lawfaremedia.org/article/managing-the-security-risks-of-geoengineering

WEF Warns of Growing Cyber Inequity Amid Escalating Complexities in Cyberspace

(James Coker – Infosecurity Magazine – 13 January 2025) Cyber inequity has widened in the past year amid increasing complexities in cyberspace and geopolitical uncertainties, the World Economic Forum (WEF)’s Global Cybersecurity Outlook 2025 has found. The WEF reports that there is substantial disparity in the capabilities of different businesses, sectors and regions to effectively respond to cyber-attacks. – https://www.infosecurity-magazine.com/news/wef-cyber-inequity-complexities/

Defense, Intelligence, and War

Don’t blow the budget on ChatGPT: Army CIO sounds alarm on big bills for GenAI

(Sydney J. Freedberg Jr. – Breaking Defense – 15 January 2025) The generative AI explosion that began with ChatGPT has led some Army organizations to run up big and unexpected bills, the service’s chief information officer told reporters Tuesday. Getting GenAI costs under control will be a major focus for a forthcoming rollout of best practices and new policies, expected by April, Leonel Garciga said. But in the meantime, said Garciga, maybe think a little harder before you click. – https://breakingdefense.com/2025/01/dont-blow-the-budget-on-chatgpt-army-cio-sounds-alarm-on-big-bills-for-genai/

Human-Machine Interaction and Human Agency in the Military Domain


(Ingvild Bode – Centre for International Governance Innovation – 15 January 2025) Militaries increasingly use artificial intelligence (AI) technologies for decision support and combat operations. AI does not replace humans, but personnel interact with AI technologies more frequently. Practices of human-machine interaction have the potential to profoundly alter the quality of human agency, understood as the ability to make choices and act, in warfare. Specifically, they introduce distributed agency between humans and machines. Forms of distributed agency will be shaped along a spectrum, preserving more room for either human or machine agency. Such practices happen in multiple locations and with multiple, networked systems. Accounting for the phenomenon of distributed agency requires going beyond perceiving challenges of human-machine interaction as straightforward problems to solve. Rather, distributed agency needs to be recognized as raising foundational operational, ethical-normative and legal challenges. – https://www.cigionline.org/publications/human-machine-interaction-and-human-agency-in-the-military-domain/

Frontiers

China Building Infrastructure For Attosecond Lasers

(Matt Swayne – Quantum Insider – 13 January 2025) China is building a state-of-the-art attosecond laser facility to observe ultrafast particle behavior and drive innovation in science and technology, Guangdong Today reports. The Advanced Attosecond Laser Infrastructure (AALI), spanning sites in Dongguan and Xi’an, will feature 10 beamlines and 22 research terminals, enabling breakthroughs in fields like quantum computing and biomedicine. Attosecond lasers, capable of capturing electron motion at quintillionths of a second, provide unprecedented precision for studying microscopic phenomena and fostering high-tech industries, including quantum computing. – https://thequantuminsider.com/2025/01/12/china-building-infrastructure-for-attosecond-lasers/

Global space economy tracking toward $944B by 2033, report finds

(Amber Corrin – NextGov – 10 January 2025) The global space economy, comprising all activities and resources underpinning the space market and space-enabled solutions, is on a trillion-dollar trajectory by 2033, according to a new report. Novaspace’s 11th edition Space Economy Report, released Jan. 9, predicts significant growth in “downstream applications” as a key driver behind growth from a $596 billion economy in 2024 to $944 billion by 2033. Downstream applications include space-based solutions and services delivering data and capabilities on Earth, such as navigation and mapping, communications and Earth observation, per the report. – https://www.nextgov.com/emerging-tech/2025/01/global-space-economy-tracking-toward-944b-2033-report/402102/?oref=ng-homepage-river

El Capitan supercomputer is ready to handle nuclear stockpile and AI workflows

(Alexandra Kelley – NextGov – 10 January 2025) The El Capitan supercomputer at Lawrence Livermore National Laboratory was officially dedicated to U.S. national security missions and nuclear stockpile management. During El Capitan’s dedication ceremony at Lawrence Livermore National Laboratory in California, stakeholders in both the public and private sectors highlighted El Capitan’s distinction as the world’s fastest supercomputer, boasting computing performance of more than 2.79 exaflops. The ceremony itself reiterated the $600 million machine’s focus on stewarding management of the U.S. nuclear stockpile and other national security research areas, along with broader scientific topics. – https://www.nextgov.com/emerging-tech/2025/01/el-capitan-supercomputer-ready-handle-nuclear-stockpile-and-ai-workflows/402088/?oref=ng-homepage-river