Weekly Digest on AI and Emerging Technologies (25 November 2024)

HIGHLIGHTS FROM DAILY DIGEST – WEEK 19 TO 22 NOVEMBER 2024

 

GOVERNANCE AND LEGISLATION

 

Senators call for watchdog to investigate TSA’s use of facial recognition

(Edward Graham – NextGov – 21 November 2024) A bipartisan coalition of lawmakers is asking the Department of Homeland Security’s internal watchdog to investigate the Transportation Security Administration’s use of facial recognition technology over concerns about the agency’s collection of biometric data. In a Wednesday letter to DHS Inspector General Joseph Cuffari, 12 senators — seven Democrats and five Republicans — called for a thorough review of how TSA uses facial recognition to verify travelers’ identities “from both an authorities and privacy perspective.” – https://www.nextgov.com/digital-government/2024/11/senators-call-watchdog-investigate-tsas-use-facial-recognition/401233/?oref=ng-homepage-river

 

Elections, Accountability, and Democracy in the Time of A.I.

(Rahul Batra – Observer Research Foundation – 20 November 2024) This paper assesses how a transformational technology like Artificial Intelligence (AI) can be used by malicious actors to manipulate information and influence election results. It analyses the impact of such activities, and explores ways by which democratic polities can address this challenge. Reviewing cases from India and other countries in South Asia, and the United States, the paper also looks at the required regulatory landscape. It outlines recommendations straddling the strategic, tactical, and technical domains; and underlines the importance of public literacy. – https://www.orfonline.org/research/elections-accountability-and-democracy-in-the-time-of-a-i

 

GAO recommends new agency to streamline how US government protects citizens’ data

(Suzanne Smalley – The Record – 20 November 2024) The U.S. Government Accountability Office (GAO) on Tuesday published a report recommending that Congress create a new federal office to offer government-wide guidance and promote laws regulating how all agencies safeguard the public’s civil rights and liberties while using personal data. The recommendation follows a GAO survey of agencies which found vastly disparate approaches to protecting citizens’ personal data. The GAO is an independent government agency that aids Congress with auditing and investigations. – https://therecord.media/gao-recommends-new-agency-data-privacy-protections

The future of the US digital economy depends on equitable access to its jobs

(Robert Maxim, Mark Muro, Yang You, Carl Romer – Brookings – 19 November 2024) Over the past several years, emerging technologies such as generative artificial intelligence (AI) have dominated headlines, while industrial strategies centered on technologies such as semiconductors have become central to U.S. economic policymaking. Meanwhile, a growing stream of scholarship has shown that certain groups—including women and many workers of color—remain underrepresented in technology-oriented fields, despite the importance of diverse workforces for firm, industry, and national competitiveness. – https://www.brookings.edu/articles/the-future-of-the-us-digital-economy-depends-on-equitable-access-to-its-jobs/

The Role of the Middle East in the US-China Race to AI Supremacy

(Vincent Carchidi, Mohammed Soliman – Middle East Institute – 19 November 2024) Artificial intelligence (AI) is a pivotal catalyst for global innovation, with the United States at the forefront of the development of this transformative technology amid its ongoing great power rivalry with China. However, a notable concern has emerged: the absence of an explicit conception of AI supremacy that threatens to undermine the US’ long-term AI strategy. The notion of AI supremacy traditionally has been difficult to define, paralleling disputes about whether competition over AI is a “race.” This report thus aims to accomplish two objectives: first, to define AI supremacy and anchor this concept in the realities of the AI competition thus far; and second, to revise the US’ AI strategy in accordance with a more comprehensive understanding of AI supremacy. The AI race, unsurprisingly, has drawn in actors from the Middle East. The United Arab Emirates (UAE) and Saudi Arabia, especially, are pursuing the development of indigenous AI ecosystems, each seeking to attain the regional upper hand, throwing their capital behind their stated national AI aims. This report attempts to steer the conversation on the global AI race toward a comprehensive conception of AI supremacy that is anchored in the realities of international affairs and US-China great power competition. – https://www.mei.edu/publications/role-middle-east-us-china-race-ai-supremacy

Safer together: How governments can enhance the AI Safety Institute Network’s role in global AI governance

(Frank Ryan, George Gor, Niki Iliadis – OECD.AI – 18 November 2024) As we integrate AI into every facet of society—from healthcare to national security—it is more than a technical challenge to ensure that the technology is secure, safe, and trustworthy. It is a global imperative. While each country must address its own AI risks, the technology’s ever-growing reach across borders demands coordinated efforts. The recently launched International Network of AI Safety Institutes is one of the most promising initiatives to address this need. Launched in May 2024 at the Seoul AI Summit, the AISI Network’s mission is “to promote the safe, secure, and trustworthy development of AI.” While the effort is commendable, it is important to ask if such an ambitious, collaborative body can effectively govern a technology as dynamic and integral to national security and competitiveness as AI. – https://oecd.ai/en/wonk/ai-safety-institute-networks-role-global-ai-governance

Semiconductors, AI, and the Gulf: Policy Considerations for the United States

(Elizabeth Dent, Grant Rumley – The Washington Institute for Near East Policy – 18 November 2024) In May 2024, Microsoft and the Emirati firm G42 announced a $1 billion deal to invest in technology infrastructure in Kenya as part of a larger arrangement aimed at harnessing the power of such partnerships. The Kenya initiative—which includes a green data center, AI local-language research, cloud services, and other components—reflects both the promise and peril of integration in today’s era of great power competition. In the Gulf, where countries like Saudi Arabia and the UAE have made AI advancement central to their national development strategies, the United States will need to determine how best to provide the technology necessary for deepening cooperation while ensuring that an overprotective stance does not drive countries to the markets of adversaries. In this timely Policy Note, Elizabeth Dent and Grant Rumley describe how the current debate focuses on semiconductors, which are essential components for advanced computing and AI. They proceed to analyze how U.S. policymakers can navigate a range of options from permissive to restrictive when considering the export of semiconductors to the Middle East, especially Gulf countries. – https://www.washingtoninstitute.org/policy-analysis/semiconductors-ai-and-gulf-policy-considerations-united-states

Pivotal Powers 2024: Innovative Engagement Strategies for Global Governance, Security, and Artificial Intelligence

(Alexandra de Hoop Scheffer, Sharinee Jagtiani, Kristina Kausch, Garima Mohan, Martin Quencez, Rachel Tausendfreund, Gesine Weber – German Marshall Fund of the United States – 18 November 2024) States outside the transatlantic alliance have gained leverage in international affairs in recent years and, with that, the potential to significantly reshape the global order. Engagement with these “pivotal powers”, which include Brazil, Indonesia, India, Nigeria, Saudi Arabia, South Africa, and Türkiye, is of paramount importance for Europe and the United States. “Pivotal Powers 2024: Innovative Engagement Strategies for Global Governance, Security, and Artificial Intelligence” offers tactics for enhancing Western cooperation on global challenges with these countries. – https://www.gmfus.org/news/pivotal-powers-2024-innovative-engagement-strategies-global-governance-security-and-artificial

 

Generative AI and the Trough of Disillusionment

(Mardi Witzel – Centre for International Governance Innovation – 18 November 2024) The early excitement around generative artificial intelligence (AI) has recently been tempered with a heavy dose of risk-related concern and growing disappointment. Among the problems cited are a lack of killer apps, weak return on investment (ROI) and the general struggle to meet high expectations. This cooling presents a renewed opportunity for more mature AI technologies such as machine learning, as companies that have invested heavily in the infrastructure to exploit generative AI experience the friction that comes from both seeking and worrying about a new technology. Machine learning has been around for decades and is considered a branch of AI. Having spent the last two years firmly in the shadow of its flashier cousin, machine learning may seem a bit old and dull. But that would be selling it short. It could be argued that machine learning remains the more impactful, more understood and safer form of AI. – https://www.cigionline.org/articles/generative-ai-and-the-trough-of-disillusionment/

COP29 Digitalisation Day ‘hardwires’ the digital technology sector for climate action

(International Telecommunication Union – 16 November 2024) Global tech and environment leaders at COP29 have endorsed a declaration on boosting climate action with digital technologies while cutting the environmental impacts of those same technologies. In total, endorsements representing over 1,000 governments, companies, civil society organizations, international and regional organizations, and other stakeholders were received for the COP29 Declaration on Green Digital Action. – https://www.itu.int/en/mediacentre/Pages/PR-2024-11-16-cop29-declaration-green-digital-action.aspx

Acquiring AI Companies: Tracking U.S. AI Mergers and Acquisitions

(Jack Corrigan, Ngor Luong, Christian Schoeber – CSET – November 2024) The commercial artificial intelligence industry is evolving rapidly, and the competition dynamics in this burgeoning sector will impact the rate, diversity, and direction of AI innovation in the years ahead. Maintaining U.S. technological leadership will require policymakers to promote competition in the AI sector and prevent incumbent firms from wielding their market power in harmful ways. One important component of this effort will be monitoring mergers and acquisitions activity in the AI sector. M&A allows companies to gain access to talent, technologies, and other resources that may otherwise be out of their reach or too difficult to develop in-house. These transactions can allow firms to maintain their technological edge, gain economies of scale, and expand their business, all of which can drive growth and promote the healthy functioning of a market economy. On the flip side, however, M&A can also enable companies to entrench their economic power, reduce incumbent firms’ incentives to invest in innovation, and hamper the ability of new disruptive firms to enter the market. This brief seeks to shed light on major trends in M&A activity in the U.S. AI sector between 2014 and 2023. Our analysis is based on a dataset of 4,354 M&A transactions gathered through PitchBook, a third-party provider of corporate financial information. – https://cset.georgetown.edu/publication/acquiring-ai-companies-tracking-u-s-ai-mergers-and-acquisitions/

SECURITY

NIST sets up new task force on AI and national security

(Alexandra Kelley – NextGov – 21 November 2024) The National Institute of Standards and Technology set up a new task force within its existing Artificial Intelligence Safety Institute focusing on evaluating the myriad security implications of artificial intelligence models with inter-agency participation. Dubbed the Testing Risks of AI for National Security Taskforce, or TRAINS, the group consists of members from the Department of Defense — including its Chief Digital and Artificial Intelligence Office and the National Security Agency — the Department of Energy and its national labs; the Department of Homeland Security and the Cybersecurity and Infrastructure Security Agency; and the National Institutes of Health within the Department of Health and Human Services. – https://www.nextgov.com/artificial-intelligence/2024/11/nist-sets-new-task-force-ai-and-national-security/401214/?oref=ng-homepage-river

North Korea’s Cyber Strategy: An Initial Analysis

(Abhishek Sharma – Observer Research Foundation – 21 November 2024) North Korea is among the states that stand out for their often defiant behaviour, divergent from typical diplomatic niceties and non-compliant with widely accepted international liberal norms and rules. This ‘uniqueness’ is seen, for instance, in the country’s nuclear weapons development programme, which has been the object of global attention since the early 1990s. North Korea has now extended this behaviour to the cyber domain, marked by an increasing number of attacks by state-sponsored hackers against other states. Its development of cyber capabilities offers insights into the regime’s views on the importance of the cyber domain in contemporary warfare. This brief examines the drivers of North Korea’s cyber capabilities, gauging its successes and the risks it poses to countries, particularly the United States, South Korea, and Japan. – https://www.orfonline.org/research/north-korea-s-cyber-strategy-an-initial-analysis

Like biosecurity, cybersecurity is essential for rural industries

(Dean Frye – ASPI The Strategist – 21 November 2024) When you enter Australia, you meet some of the strictest biosecurity screening in the world. Even domestically, if you travel to South Australia with any kind of fruit in your bag, you could be facing a $375 fine. These protocols may seem frustrating. But they’re crucial in keeping our unique environment and rural industries—such as food and agriculture—safe from biosecurity threats. But biosecurity is far from the only threat to rural industries. As these industries evolve and their adoption of new technologies and devices increases, we lack investment in, and understanding of, less visible but equally damaging security threats such as cybercrime. – https://www.aspistrategist.org.au/like-biosecurity-cybersecurity-is-essential-for-rural-industries/

Do We Want an “IAEA for AI”?

(Akash Wasil – Lawfare – 20 November 2024) In November 2023, nations at the first global AI Safety Summit recognized the possibility of “serious, even catastrophic harm” from advanced artificial intelligence (AI). Some of the risks identified stem from deliberate misuse. For example, a nation could decide to instruct an advanced AI system to develop novel biological weapons or cyberweapons; Anthropic CEO Dario Amodei testified in 2023 that AI systems would be able to greatly expand threats from “large-scale biological attacks” within two to three years. Other risks mentioned arise from unintentional factors—experts have warned, for instance, that AI systems could become powerful enough to subvert human control. A race toward superintelligent AI could lead to the creation of highly powerful and dangerous systems before scientists have developed the safeguards and technical understanding required to control them. – https://www.lawfaremedia.org/article/do-we-want-an--iaea-for-ai

“Privacy by Design” Lessons for “Security by Design”

(Justin Sherman – Lawfare – 19 November 2024) The U.S. 2023 National Cybersecurity Strategy called out the need for companies to “embrace security and resilience by design” when building technology and pledged that the federal government would work to realign market incentives to boost security by design. Since the strategy’s release, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) has put out several white papers, blogs, and guidance documents elaborating on the concept of “security by design” and encouraging companies to integrate the processes and practices into their operations. The U.S. government is making important investments—yet is still in its early days of elaborating on this concept. CISA’s preferred practices are fairly accessible—yet are all recommended, not required. Many open questions and challenges remain at conceptual, operational, technical, and regulatory levels. – https://www.lawfaremedia.org/article/privacy-by-design--lessons-for--security-by-design

Many US water systems exposed to ‘high-risk’ vulnerabilities, watchdog finds

(Jonathan Greig – The Record – 18 November 2024) Nearly 100 drinking water systems across the U.S. have “high-risk” vulnerabilities in the technology they use to serve millions of residents, according to a new report from a federal watchdog. The Environmental Protection Agency’s Office of Inspector General conducted a review of the agency’s cybersecurity initiatives, using an algorithm to rank issues at specific water utilities across the U.S. revolving around email security, IT hygiene, vulnerabilities, adversarial threats, and malicious activity. – https://therecord.media/us-water-systems-exposed-vulnerabilities

AI Safety and Automation Bias: The Downside of Human-in-the-Loop

(Lauren Kahn, Emelia Probasco, Ronnie Kinoshita – CSET – November 2024) Automation bias is the tendency for an individual to over-rely on an automated system. It can lead to increased risk of accidents, errors, and other adverse outcomes when individuals and organizations favor the output or suggestion of the system, even in the face of contradictory information. Automation bias can endanger the successful use of artificial intelligence by eroding the user’s ability to meaningfully control an AI system. As AI systems have proliferated, so too have incidents where these systems have failed or erred in various ways, and human users have failed to correct or recognize these behaviors. This study provides a three-tiered framework to understand automation bias by examining the role of users, technical design, and organizations in influencing automation bias. It presents case studies on each of these factors, then offers lessons learned and corresponding recommendations. – https://cset.georgetown.edu/publication/ai-safety-and-automation-bias/

DEFENSE, INTELLIGENCE, AND WAR

Five Innovations that Make Defence Procurement Faster and Cut Cost and Risk

(Trevor Taylor, Linus Terhorst – RUSI – 20 November 2024) GCAP’s management involves five innovations that should drive success in its technology development and timeline. They also have the potential to transform the UK approach to major development, production and support programmes – if government is willing to change how it approaches project financing. – https://www.rusi.org/explore-our-research/publications/commentary/five-innovations-make-defence-procurement-faster-and-cut-cost-and-risk

 

FRONTIERS

 

Researchers Find That Moving Vehicles Could Use Quantum Information to Coordinate Actions

(Matt Swayne – Quantum Insider – 20 November 2024) Researchers at the University of Kent demonstrated that quantum information could be used to coordinate the actions of moving devices, such as drones or autonomous vehicles, potentially improving logistics efficiency and reducing delivery costs. By simulating the phenomenon on IBM’s superconducting quantum computer, the team showed that two devices sharing entangled qubits can influence each other without direct communication, even when separated. The study, published in New Journal of Physics, highlights a novel application of quantum computing to enhance coordination between devices and explores the practical challenges of implementing these strategies on current hardware. – https://thequantuminsider.com/2024/11/20/researchers-find-that-moving-vehicles-could-use-quantum-information-to-coordinate-actions/
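
For intuition, the perfectly correlated statistics of a shared entangled pair can be sketched with a few lines of plain Python that sample the measurement probabilities of a Bell state (an illustrative toy, not the Kent team’s actual protocol or their IBM-hardware experiment):

```python
import random

# Measuring the Bell state |Phi+> = (|00> + |11>)/sqrt(2) in the
# computational basis: the squared amplitudes give the joint outcome
# probabilities P(00) = P(11) = 0.5 and P(01) = P(10) = 0.
probs = {"00": 0.5, "01": 0.0, "10": 0.0, "11": 0.5}

rng = random.Random(7)
samples = rng.choices(list(probs), weights=list(probs.values()), k=10_000)

# Each device alone sees an unbiased coin flip...
p_first_is_1 = sum(s[0] == "1" for s in samples) / len(samples)
# ...yet the two devices' outcomes always agree, with no message exchanged.
agreement = sum(s[0] == s[1] for s in samples) / len(samples)
print(p_first_is_1, agreement)  # agreement is exactly 1.0
```

Coordination schemes of the kind the article describes exploit such guaranteed agreement to let separated devices make consistent choices without a communication round-trip.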

 

Lawrence Livermore’s El Capitan supercomputer is officially fastest in the world

(Alexandra Kelley – NextGov – 18 November 2024) The El Capitan supercomputer housed at Lawrence Livermore National Laboratory in California was officially named the fastest supercomputer in the world, processing a peak of 2.7 exaflops and able to perform 1.742 quintillion calculations per second, a 20-fold increase over the lab’s flagship system, Sierra. Announced in a press call on Sunday, El Capitan’s verification was the result of a collaboration between the National Nuclear Security Administration, Lawrence Livermore National Lab, Hewlett Packard Enterprise and IT company AMD. – https://www.nextgov.com/emerging-tech/2024/11/lawrence-livermores-el-capitan-supercomputer-officially-fastest-world/401113/?oref=ng-home-top-story
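
The quoted figures can be sanity-checked with quick unit arithmetic; Sierra’s roughly 125-petaflop peak is our assumption from public TOP500 listings, not a number stated in the article:

```python
# Unit check for the El Capitan figures quoted above (a sketch under
# the assumption that Sierra's peak is ~125 petaflops).
EXA = 1e18   # floating-point operations per second in one exaflop
PETA = 1e15  # floating-point operations per second in one petaflop

el_capitan_peak = 2.7 * EXA          # reported peak throughput
el_capitan_sustained = 1.742 * EXA   # 1.742 quintillion calculations/s
sierra_peak_assumed = 125 * PETA     # assumed Sierra peak (TOP500)

speedup = el_capitan_peak / sierra_peak_assumed
print(round(speedup, 1))  # on the order of the reported 20-fold increase
```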

 

EU’s New Technology Chief Prioritizes Quantum for Europe’s Sovereignty

(Matt Swayne – Quantum Insider – 18 November 2024) Henna Virkkunen, the incoming EU Executive Vice-President for Tech Sovereignty, Security, and Democracy, outlined an ambitious vision for European leadership in quantum technologies, including a potential EU Quantum Act to address market fragmentation and foster coordinated investments. During her parliamentary confirmation hearing, Virkkunen emphasized the importance of quantum technologies for EU sovereignty, competitiveness, and defense capacities, proposing a long-term EU Quantum Chips Plan to position Europe as a global leader. Virkkunen’s proposals align with the Quantum Flagship SRIA 2030 roadmap and were endorsed by industry leaders like the European Quantum Industry Consortium, signaling strong support for Europe’s strategic push toward becoming the world’s first ‘Quantum Valley.’ – https://thequantuminsider.com/2024/11/18/eus-new-technology-chief-prioritizes-quantum-for-europes-sovereignty/

Neuroscientist Heather Berlin on What Conscious AI Could Never Be

(James Dargan – AI Insider – 17 November 2024) In her TEDxKC talk, neuroscientist Dr. Heather Berlin confronted one of the most provocative questions of our age: what would a conscious AI look like? Her conclusion is strikingly definitive: in its current form, AI may mimic what humans do, but it will never be what humans are. Berlin explores the profound implications of this distinction and the potential for humanity’s future as it merges with technology. “AI doesn’t have experiences,” Berlin stated, framing her argument. “It’s not aware of itself like we are. It’s not conscious. Or is it?” This paradox underscores the crux of her talk: while AI’s abilities are accelerating at an astonishing rate, its essence remains fundamentally distinct from human consciousness. – https://theaiinsider.tech/2024/11/17/neuroscientist-heather-berlin-on-what-conscious-ai-could-never-be/

OpenAI Unveils US AI Strategy Blueprint Featuring Economic Zones, Private Investment, and a North American Alliance to Rival China

(James Dargan – AI Insider – 16 November 2024) OpenAI has outlined its ambitious “blueprint for U.S. AI infrastructure,” underlining AI economic zones, private investment in government-backed projects, and leveraging nuclear power expertise, according to a document reviewed by CNBC. The blueprint aims to position the U.S. as a global leader in AI while competing with China’s rapidly advancing initiatives. – https://theaiinsider.tech/2024/11/16/openai-unveils-us-ai-strategy-blueprint-featuring-economic-zones-private-investment-and-a-north-american-alliance-to-rival-china/