As malicious actors become more adept at circumventing international nuclear nonproliferation safeguards, the United States government has invested in research to better detect their malign activities. To address threats to nonproliferation, agencies like the International Atomic Energy Agency (IAEA) employ careful monitoring techniques to make sure nuclear materials subject to agreements aren’t used to produce nuclear weapons. They also employ sophisticated forensics methods to determine the origin of nuclear materials recovered by law enforcement. However, these techniques are often time- and labor-intensive.
New research from Pacific Northwest National Laboratory (PNNL) uses machine learning, data analytics, and artificial reasoning to make threat detection and forensic analysis in the nuclear domain easier and faster. By combining these computational techniques with expertise in nonproliferation and safeguards, PNNL researchers develop innovative methods and turn them into actionable, real-world solutions.
PNNL nonproliferation analyst Benjamin Wilson is in a unique position to merge these data analytics and machine learning techniques with nuclear analysis. As a former safeguards inspector for the IAEA, Wilson knows exactly what kind of information the IAEA looks for in order to expose actors’ possible malign activities.
“Preventing nuclear proliferation requires vigilance,” said Wilson. “It involves labor, from audits of nuclear materials to investigations into who is handling nuclear materials. Data analytics-driven techniques can be leveraged to make this easier.”
With support from the National Nuclear Security Administration (NNSA), the Mathematics for Artificial Reasoning in Science (MARS) Initiative, and the Department of Defense, PNNL researchers are working on several projects to make nuclear nonproliferation and safeguards more effective. A few highlights from their recent research follow.
Detecting diversion of nuclear materials
Nuclear reprocessing facilities take used nuclear fuel and separate it into waste products and compounds that can be recycled as fresh fuel for nuclear reactors. These compounds contain uranium and plutonium, which could also be used to produce nuclear weapons. The IAEA monitors nuclear facilities to make sure that none of the nuclear material is diverted to weapons. This typically involves regular inspections as well as sample collection for subsequent destructive assay.
“We could save a lot of time and labor costs if we could create a system that automatically detects abnormalities in a facility’s process data,” said Wilson.
In a study published in The International Journal of Nuclear Safeguards and Non-Proliferation, Wilson worked with researchers from Sandia National Laboratories to build a virtual replica of a reprocessing facility. They then trained a machine learning model to detect process data patterns representing the diversion of nuclear materials. In this simulated environment, the model showed encouraging results. “Though it is unlikely that this approach would be used in the near future, our system provides a promising start to complement existing safeguards,” said Wilson.
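The published model and simulated facility data are not reproduced here, but the general pattern, an anomaly detector trained on normal operating data that flags departures from it, might look something like the following sketch. The feature names and numbers are invented for illustration, and the study’s actual approach may differ.

```python
# Hypothetical sketch: flag anomalous process data with an Isolation Forest.
# The published study's model and features are not reproduced here; the
# facility measurements below are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Simulated "normal" operation: e.g., tank level, flow rate, U concentration.
normal_ops = rng.normal(loc=[50.0, 1.2, 0.85], scale=[2.0, 0.05, 0.02],
                        size=(500, 3))

# Simulated diversion scenario: a small, persistent shift in the balance.
diversion = rng.normal(loc=[47.0, 1.2, 0.80], scale=[2.0, 0.05, 0.02],
                       size=(20, 3))

# Train only on normal operating data, then score new observations.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_ops)

print("normal   ->", model.predict(normal_ops[:5]))  # mostly +1 (inlier)
print("diverted ->", model.predict(diversion[:5]))   # -1 marks suspected anomalies
```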
Analyzing texts for indications of nuclear proliferation
Who is researching nuclear materials? Is that research consistent with that state’s international agreements and declarations? Where did they acquire the specialized equipment they are using? Where are nuclear materials currently being used for research?
These are the types of questions IAEA analysts work to answer every day. To find those answers, they usually need to spend many hours reading through research articles and manually sifting through a lot of data.
PNNL data scientists Megha Subramanian and Alejandro Zuniga, along with Benjamin Wilson, Kayla Duskin, and Rustam Goychayev, are working to make this task much easier through research featured in The International Journal of Nuclear Safeguards and Non-Proliferation.
“We wanted to create a way for researchers to ask nuclear domain-specific questions and receive correct answers,” said Subramanian.
The team developed a machine learning tool based on Google’s BERT, a language model trained on Wikipedia data to answer general knowledge queries. Language models allow computers to “understand” human language: they can read texts and extract important information from them, including context and nuance. People can ask BERT questions, like “What is the capital of France?” and receive the correct answer.
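As a generic illustration (not the team’s code), extractive question answering with an off-the-shelf BERT model takes only a few lines using the Hugging Face transformers library:

```python
# Generic illustration of extractive question answering with BERT
# (not the PNNL team's code). Requires: pip install transformers torch
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="bert-large-uncased-whole-word-masking-finetuned-squad",
)

context = "Paris is the capital and most populous city of France."
result = qa(question="What is the capital of France?", context=context)

print(result["answer"])  # -> "Paris"
print(result["score"])   # model confidence in the extracted span
```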
While the Wikipedia-trained model excels in answering general knowledge questions, it lacks knowledge of the nuclear domain. Therefore, the team created AJAX—Artificial Judgement Assistance from text—to fill this knowledge gap.
“While AJAX is still in its early stages, it has the potential to save analysts many hours of work by providing both a direct answer to queries and the evidence for that answer,” said Subramanian. The evidence is especially intriguing to the researchers: machine learning models are often described as “black boxes” that leave no trail of evidence for their answers, even when those answers are correct. AJAX seeks to provide auditability by retrieving the documents that contain its evidence.
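The article does not detail AJAX’s internals, but the retrieve-then-answer pattern it describes, returning both an answer and the document that supports it, can be sketched as below. The corpus, query, and model choice are placeholders, not the actual AJAX implementation.

```python
# Hypothetical sketch of a retrieve-then-answer pipeline: rank documents by
# similarity to the question, keep the top hit as auditable evidence, then
# extract an answer span from it. Not the actual AJAX implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import pipeline

corpus = [  # stand-in documents; a real system would index research articles
    "Facility A operates a uranium enrichment cascade commissioned in 2015.",
    "Gas centrifuges require maraging steel or carbon fiber rotors.",
    "Facility B conducts medical isotope research using low-enriched uranium.",
]

question = "What materials do gas centrifuge rotors require?"

# Step 1: retrieve the most relevant document (the evidence trail).
vectorizer = TfidfVectorizer().fit(corpus + [question])
scores = cosine_similarity(
    vectorizer.transform([question]), vectorizer.transform(corpus)
)[0]
evidence = corpus[scores.argmax()]

# Step 2: extract an answer span from the retrieved evidence.
qa = pipeline("question-answering",
              model="bert-large-uncased-whole-word-masking-finetuned-squad")
result = qa(question=question, context=evidence)

print("Answer:  ", result["answer"])
print("Evidence:", evidence)
```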
“When the domain is as important as nuclear proliferation detection, it is critical for us to know where our information is coming from,” said Subramanian.
Using image analysis to determine the origins of nuclear materials
Sometimes, a law enforcement agent, in the United States or elsewhere, will encounter nuclear material that is outside of regulatory control and of unknown origin. In these instances, determining where the material came from is crucial, since the recovered material may be only a portion of what is at risk of being trafficked. To strengthen nuclear security, the IAEA maintains a database of such incidents and encourages countries to cooperate in combating illicit nuclear trafficking. Forensic analysis of nuclear materials is one tool used in this vital effort.
PNNL researchers, in collaboration with the University of Utah, Lawrence Livermore National Laboratory, and Los Alamos National Laboratory, developed a way to use machine learning to aid in the forensic analysis of these samples. Their method uses electron microscopy images to compare microstructures of different nuclear samples. Different samples contain subtle differences that can be identified using machine learning.
“Imagine that synthesizing nuclear materials was like baking cookies,” said Elizabeth Jurrus, MARS initiative lead. “Two people can use the same recipe and end up with different-looking cookies. It’s the same with nuclear materials.”
The synthesis of these materials could be affected by many things, such as the local humidity and the purity of the starting materials. The result is that nuclear materials produced at a specific facility end up with a specific structure, a “signature look” that can be seen with an electron microscope.
“Our collaborators at the University of Utah built a library of images of different nuclear samples,” said Alexander Hagen, co-lead author of the study published in the Journal of Nuclear Materials. “We used machine learning to compare images from their library to unknown samples to figure out where the unknowns came from.”
This could help nuclear analysts figure out the source of a material and guide further investigations.
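The study’s actual pipeline is more involved, but the core idea, matching an unknown sample against a labeled library using features extracted from microscopy images, can be sketched with a simple supervised classifier. The feature vectors and facility labels below are synthetic placeholders.

```python
# Hypothetical sketch: match an unknown sample to a library of labeled
# microscopy images via features extracted from each image. The real study's
# feature extraction and model differ; all numbers here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(seed=1)

# Pretend each image has been reduced to a feature vector describing its
# microstructure (e.g., mean particle size, a texture statistic).
facility_a = rng.normal(loc=[3.0, 0.4], scale=0.1, size=(100, 2))
facility_b = rng.normal(loc=[2.2, 0.7], scale=0.1, size=(100, 2))

X = np.vstack([facility_a, facility_b])
y = np.array(["Facility A"] * 100 + ["Facility B"] * 100)

clf = RandomForestClassifier(random_state=0).fit(X, y)

# An "unknown" interdicted sample whose features resemble Facility B.
unknown = np.array([[2.25, 0.68]])
print(clf.predict(unknown))        # -> ['Facility B']
print(clf.predict_proba(unknown))  # class probabilities for analyst review
```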
Machine learning to keep citizens safe
Though it may take some time before agencies like the IAEA adopt machine learning techniques into their nuclear threat detection processes, it is clear that these technologies can streamline threat detection and forensic analysis.
“Though we don’t expect machine learning to replace anyone’s job, we see it as a way to make their jobs easier,” said Jurrus. “We can use machine learning to identify important information so that analysts can focus on what is most significant.”