Digital Press Briefing with Paul Dean, Principal Deputy Assistant Secretary in the Bureau of Arms Control, Deterrence, and Stability [1]

Date: May 2024

MODERATOR: Greetings from the U.S. Department of State’s Asia Pacific Media Hub. I would like to welcome journalists to today’s on-the-record briefing with Principal Deputy Assistant Secretary Paul Dean of the Bureau of Arms Control, Deterrence, and Stability. PDAS Dean will discuss responsible military use of AI and autonomy in the Indo-Pacific and collaborative solutions for addressing Chemical Weapons Convention compliance concerns. He will also address the importance of transparency by nuclear weapons states under the Nuclear Nonproliferation Treaty and strategic risk reduction to promote regional and global stability. With that, let’s get started. PDAS Dean, I’ll turn it over to you for your opening remarks.

MR DEAN: Great, thank you so much, and thanks, everyone, for taking the time to join us. Good morning, everybody. As was said, I’m Paul Dean, the Principal Deputy Assistant Secretary here in Washington for Arms Control, Deterrence, and Stability at the State Department. I’m really interested in this conversation; as a bureau, we have been doing a lot of very constructive engagement in the Asia Pacific region. Most recently, I led a delegation to Singapore and Vietnam for diplomatic discussions on the areas that were flagged, especially on military artificial intelligence and the opportunity and need to work together to create a normative framework of responsible behavior governing how states will begin to use this new and emerging technology in its military applications.

We had great discussions on the need to bolster the regime prohibiting chemical weapons and to support the Organization for the Prohibition of Chemical Weapons in the face of some growing threats on the chemical weapons side. I think it is important for the international community to project strong resolve to maintain what has been a very important norm of stability for many decades.

And of course, finally, as the U.S. Arms Control Bureau, I think it’s important for all states to explore ways we can work together constructively to advance nuclear arms control and nuclear stability. Certainly for our part, President Biden has been very clear that we view it as our responsibility to preserve the remaining pillars of nuclear stability and to make progress in this area. I think it’s the responsibility of all states possessing nuclear weapons to make progress. And so I had really excellent conversations with our partners on this, and I’m looking forward to this discussion and taking your questions. So with that brief frame, I’m certainly happy to take some questions. Over.

MODERATOR: Thank you, PDAS Dean. We will now turn to the question-and-answer portion of today’s briefing. Our first question goes to Masakatsu Ota from Kyodo News, Tokyo, Japan, who submitted his question in advance: “Was there any progress in the most recent working group discussion on AI and its military use between the U.S. and China? If so, do you see any chance on the horizon to control AI’s application to nuclear command and control? Also, what is the most recent update on arms control dialogue with Russia, especially on a post-New START option?”

MR DEAN: So, all good questions.
I do think that there is a real opportunity right now, as countries increasingly turn to artificial intelligence, to establish what the rules of responsible and stabilizing behavior will look like. This is a conversation, apropos of your question, that all major militaries and major economies – like the United States and China – have to deal with. But I would also emphasize that all countries have a stake and a role in creating this normative framework.

So the way forward, as we have seen it, is really exemplified by the political declaration on responsible military uses of AI and autonomy that we and now 54 partners have joined together to endorse. That group does not include China, but we are open to collaborative, constructive contributions from any country that is ready and willing to play a meaningful role in creating this normative framework. The political declaration, as you probably know, reflects 10 basic rules of behavior governing how countries will conduct process and legal reviews. It will ensure there is no accountability gap in the military use of artificial intelligence, and ensure that applications are designed and used according to rigorous technical specifications, with designs built in so that there are safeguards and the technology can be used in a responsible way.

This technology will really revolutionize militaries across a range of applications. And I would emphasize here that the issue is not limited to battlefield use; these technologies will be used by militaries across the entire range of their operations – efficiencies, logistics, decision making. I think this presents great promise and there is significant upside here, but of course, as with all new technologies, there are risks if the technology is not used in a responsible way. And so the political declaration and its rules of behavior are aimed at guiding states in using the technology in a stabilizing and responsible way.

We were very pleased to be joined by this group of 54 endorsing states here in Washington last month for the first annual plenary meeting of the countries that have endorsed the declaration, to chart an ambitious work plan to implement these measures and to ensure that we are building our collective capacity, raising awareness, and building the capabilities to discharge the commitments this group of countries has undertaken. Before we leave this topic, I want to point out that in the Asia Pacific region, Singapore, the Republic of Korea, and Japan are key partners in this enterprise; they gave really outstanding interventions at the plenary meeting and are committed, as are we, to driving this work forward in concrete, capacity-building ways.

On the question about nuclear stability, this is, again, something that we remain deeply committed to, but as your question points out, we need a willing partner to make progress on this issue. This is not anything that one country can do alone. These are negotiated outcomes, and the stability in many ways results from the open channels of communication, doctrinal clarity, and mutually beneficial, mutually accepted limits and restrictions that are part and parcel of arms control agreements. So right now on New START, the Russian Federation has decided to cease implementation of this important bastion of nuclear stability.
This is a decision that we, of course, profoundly regret, and we urge the Russian Federation to resume its implementation of the New START Treaty. It is, certainly in our view, in both sides’ continued interest to have nuclear stability. And indeed, as I said at the very top, it is a bilateral treaty, but this is a global issue, and it is in everyone’s interest to ensure that the nuclear states are managing that nuclear relationship in a stable and responsible way. That’s something that the United States is deeply committed to. Thanks.

MODERATOR: The next question goes to Albert Lee from Overt Defense, Kuala Lumpur, Malaysia, who put his question in the Q&A: “What metrics decide which Indo-Pacific countries are engaged first for reaching further agreements on use of military AI and autonomy? Does a nation’s potential candidacy for AUKUS Pillar II affect the interest the U.S. has in pursuing a deal?” Over.

MR DEAN: We are very interested in having a broad and cross-regional group of endorsing states join us in implementing this political declaration. We value a diversity of perspectives, and indeed, if you look at the group of 54 countries that have joined, there are representatives from every UN regional group and representatives that are at different stages in their development and use of artificial intelligence in a military context. That is by design, because we really want the political declaration and the measures of responsibility it reflects to represent a genuine international consensus, and we want it to work for all stakeholders. And so we want to have the perspectives of partners. We want to have regional perspectives. We want to have diverse economic perspectives, and we have quite intentionally set out to build a coalition that reflects that diversity.

And so the answer to the question is that we really welcome wide participation from states in the Asia Pacific region. What I have found in my engagements all over the world on this issue is that there is broad interest in working together to establish what the rules of responsible behavior will be for an emerging, revolutionary technology. Opportunities like this don’t come around all that often, and I think states immediately understand that this is an opportunity to shape the international normative system on military artificial intelligence and to project our shared commitment to responsibility and stability in using this technology. Over.

MODERATOR: The next question is from Christopher Woody from Bangkok, Thailand: “The U.S. is working with a number of Indo-Pacific countries to improve maritime domain awareness across the region. What role does the Biden administration see for AI and autonomous technologies in advancing these efforts?” Over.

MR DEAN: I think this really does get to the point we were discussing earlier: this technology will have profound transformational effects on militaries, and not only in a battlefield context. In domain awareness, for example, I would expect that AI will have profound implications for countries’ ability to conduct domain awareness. And so I think that’s extremely important. And similarly, I think it’s important that states coalesce around some basic rules of how to develop and use the technology.
And if you look at the political declaration and the 10 measures contained in it, it really points in the direction of ensuring that there are some uniform principles of responsible behavior that would apply to non-battlefield uses. Just to give you one example, one principle in the political declaration is to ensure that AI applications are always developed and used for a specific intended purpose. That sounds basic, but what we really do not want is countries purchasing AI applications and then incorporating them into their militaries to do something they were not specifically designed to do. We see a real risk of mistake and misinterpretation potentially implicated by that.

And so in the domain awareness context, we want to ensure that when countries do start using this technology, it is being used for specifically designed applications; that the AI application is not being asked to do something it was not specifically designed to do; and that the implementers of the application are specifically trained on how to use it, including being trained on how to be aware of, and resistant to, for example, automation bias. And so you’ll see, when you look through the political declaration, that these very fundamental, very basic norms of behavior that this group of 54 countries has now coalesced around apply to a broad range of uses of artificial intelligence. That is certainly by design, and we want to continue working to build out this group. We want to make sure that we are open to new countries endorsing and joining this effort to both build these rules of responsibility and build our collective capacity to implement them, especially countries in the Asia Pacific region. Over.

MODERATOR: Thank you, PDAS Dean. The next question is from Colin Clark, Breaking Defense, from Sydney, Australia: “U.S. policy has been, for some time, that a human must always be the final actor or decision maker in the AI kill chain. Given the speeds with which cyber attacks can occur, and the fact that they may not be kinetic, is the U.S. likely to press for a dual-track approach on such decision-making policy in its discussions with China?” Over.

MR DEAN: My answer to this one – and I’m not sure if you have nuclear weapons in mind or not when you’re asking this – is that it really does bring to mind that we have made a very clear and strong commitment that in cases of nuclear employment, that decision would only be made by a human being. We would never defer a decision on nuclear employment to AI. We strongly stand by that statement, and we have made it publicly with our colleagues in the UK and France. We would welcome a similar statement by China and the Russian Federation. We think it is an extremely important norm of responsible behavior, and we think it is something that would be very welcomed in a P5 context. Over.

MODERATOR: Okay, our next question is from Fauzan Malufti from Jatosint Naval News, Jakarta, Indonesia: “Have any ASEAN countries formally requested assistance from the U.S. in adopting AI and autonomous technology in their military? What specific AI and autonomous technology solutions are the U.S. Government and defense companies focusing on for Southeast Asia?” Over.

MR DEAN: Here I would say that the real thrust of our work on the political declaration is to build a consensus around the rules of responsible behavior.
This is not an effort aimed at building any specific technical capacity. Rather, as these technical capacities emerge, states will of course begin to incorporate them into their militaries – and I think there is, as I said at the beginning, great promise in this; I think artificial intelligence applications in militaries will significantly help militaries deliver on their international humanitarian law obligations. They will help achieve better fact-based decision making. They will help achieve efficiencies. But the efforts we have underway on the political declaration to build support and build capacity to implement these rules of behavior are normative in nature. They are not technology-specific or application-specific. They are, if you will, the meta rules that, when adopted, will go a long way toward managing the risk that the technology could be used in an irresponsible or destabilizing way. That’s what we do not want. And so we have offered a set of rules that are now, as we’ve discussed, widely endorsed and that can be applied to a range of AI applications in a range of contexts. When they are applied, they go a long way to ensuring that whatever the AI application is, it is being designed, incorporated, and used in a responsible way. Over.

MODERATOR: Okay. And our next question is from Daniel Hurst from The Guardian, based in Australia: “The Australian Government is considering signing the Treaty on the Prohibition of Nuclear Weapons. Why has the TPNW been so popular in Australia’s immediate region? And what message do you have for the Australian Government as it considers whether it can sign?” Over.

MR DEAN: Well, I may not be in the best place to answer the first question on why it is so popular in Australia’s region, but I think it is important to acknowledge that while I understand the frustration globally with the pace of disarmament, there are no shortcuts available in nuclear stability and nuclear disarmament. And while the pace may not be as rapid as one would want, we, for our part, are committed to making progress on our NPT obligations to pursue negotiations in good faith toward disarmament. But the key point here is that these are negotiations, and this is not something that any country can do unilaterally. We need good-faith interlocutors. We need the Russian Federation to engage constructively in arms control. We have done this at multiple points in a very fraught relationship. The National Security Advisor has been quite clear that we remain prepared to engage without preconditions – that does not mean without accountability, but without preconditions – and we are prepared to give this issue the prioritization that it deserves, because we view it as our responsibility to make progress on nuclear stability and nuclear disarmament.

By the same token, we need a good-faith interlocutor on the side of the PRC to make some progress in managing a complicated nuclear deterrence relationship in a way that really minimizes the risk of misunderstanding, misimpression, and miscalculation that can lead to poor decision making and unnecessary and costly arms races. So I understand the international frustration with the pace of progress on this issue, and what I would say is that that frustration would be well directed toward where the real impediments to progress are.
For our part, we are ready to make progress, but we need good-faith interlocutors on the other side of the table. Over.

MODERATOR: And unfortunately that’s all the time we have today for questions. PDAS Dean, if you have any closing remarks, I’ll turn it back over to you.

MR DEAN: No, thank you, but I did want to say thank you to everyone for what I think were really excellent questions. I really appreciate everybody’s interest, and I’m certainly always happy to talk about our work to promote norms of responsible behavior with respect to militaries, including the artificial intelligence issue, and with respect to nuclear stability. I very much appreciate everyone’s time and interest in these matters. Thank you.

MODERATOR: And thank you, PDAS Dean, and thank you to our participants for your questions today. We will provide a transcript of this briefing to participating journalists as soon as it is available. We’d also love to have your feedback, and you can contact us at any time at AsiaPacMedia@state.gov. Thanks again for your participation, and we hope you can join us for another briefing soon.

[END]

---

[1] https://www.state.gov/digital-press-briefing-with-paul-dean-principal-deputy-assistant-secretary-in-the-bureau-of-arms-control-deterrence-and-stability/