Computer science professor V.S. Subrahmanian began his research on artificial intelligence and counterterrorism nearly two decades ago, when many skeptics called the idea “nuts.” Subrahmanian was undeterred, convinced that the necessary data existed; it simply had not yet been brought together into actionable insights.
“I’m striving to remain one step ahead in this rapidly evolving landscape, rather than merely playing catch-up like most of the industry,” Subrahmanian said.
At the forefront of this research is the Northwestern Security & AI Lab (NSAIL), which Subrahmanian leads. The lab focuses on projects addressing counterterrorism and artificial intelligence.
Launched three years ago in collaboration with the Buffett Institute for Global Affairs and the McCormick School of Engineering, NSAIL draws on undergraduates, graduate students, postdoctoral researchers and staff from departments across Northwestern University.
On October 17, the lab hosted its third annual Conference on AI & National Security, where researchers presented projects reflecting NSAIL’s focus on addressing contemporary security challenges through AI.
The Drone Early Warning System project, presented by Valerio La Gatta, a postdoctoral researcher at NSAIL, focuses on predicting drone trajectories to assess potential threats, using data provided by the Dutch National Police.
Alongside drone trajectory analysis, NSAIL has investigated the implications of deepfakes, digital content altered to mislead viewers. The lab’s Global Online Deep Fake Detection System gives journalists a tool to verify the authenticity of content.
To use the platform, journalists sign up with an email associated with their publication and upload potential deepfake content for analysis. The system is currently available only to journalists, with about 50 users.
Subrahmanian also collaborates with professors internationally. One such collaboration, with Daniel Linna Jr., director of law and technology initiatives at the Pritzker School of Law, explores questions around incorporating deepfakes into governmental foreign policy strategies.
“There are immense opportunities at this convergence of AI and law, capable of driving meaningful advancements, particularly in enhancing access to justice and promoting the rule of law. This prospect excites me immensely as a member of NSAIL,” Linna said.
He further noted that, contrary to common assumptions, distinguishing deepfakes from genuine content poses significant challenges, underscoring the importance of NSAIL’s research in this field.
Subrahmanian asserted that the United States has historically excelled in the realms of artificial intelligence and counterterrorism, pointing out significant advancements that have occurred since the inception of his academic career.
Echoing Subrahmanian, Tonmoay Deb, a second-year computer science Ph.D. candidate, said the lab’s broad collaborative reach enables it to create remarkable innovations.
“We are at the forefront of this field, driven by merit and also by forging meaningful connections with stakeholders who genuinely care about these critical issues,” Deb said.
**Interview with Professor V.S. Subrahmanian: Pioneering AI Solutions in Counterterrorism**
**Editor:** Professor Subrahmanian, you’ve been at the forefront of artificial intelligence and counterterrorism research for nearly two decades. When you first started this journey, did you anticipate the growing role of AI in both radicalization and counter-radicalization efforts?
**Subrahmanian:** I must admit, in the beginning, many didn’t believe it was feasible — they called it “nuts.” But I was convinced that the data needed for effective counterterrorism strategies existed; it just hadn’t come together in a meaningful way. Today, as we see AI being exploited by terrorist groups, it becomes even more critical to harness these technologies to counter such threats.
**Editor:** Your lab, NSAIL, has recently hosted its third annual Conference on AI & National Security. What were some of the key highlights from this year’s conference?
**Subrahmanian:** This year, we showcased several innovative projects. One standout was the Drone Early Warning System, which predicts drone trajectories using critical data from the Dutch National Police. It’s a game-changer for assessing potential threats posed by drones. Additionally, our work on detecting deepfakes—particularly the Global Online Deep Fake Detection System—is crucial for journalists today. It allows them to verify the authenticity of digital content, which is becoming increasingly important in our information-saturated age.
**Editor:** It’s fascinating that your Deep Fake Detection System is designed specifically for journalists. How do you envision this tool impacting the media landscape?
**Subrahmanian:** The media plays a vital role in shaping public perception and understanding, especially in national security matters. With the rise of misinformation through deepfakes, we wanted to empower journalists to discern fact from fiction effectively. By providing them access to this technology, we hope to foster a more informed public discourse and mitigate the risks associated with misleading content.
**Editor:** Your lab collaborates with various international experts. Can you share a bit about some of the collaborative projects currently in the pipeline?
**Subrahmanian:** Certainly! We have several exciting partnerships in progress, focusing on diverse aspects of AI in security. For instance, we’re working with colleagues on AI models to analyze and predict radicalization trends online, helping to identify potential threats before they escalate. This intersection of AI and socio-political dynamics is crucial for proactive counterterrorism efforts.
**Editor:** Given the rapid evolution of technology, how do you ensure that your research stays ahead of potential threats rather than just responding to them?
**Subrahmanian:** My approach has always been proactive. We need to anticipate how malicious actors might exploit emerging technologies, and we strive to develop countermeasures in tandem. By continuously researching and testing new AI applications, we aim to stay one step ahead in this dynamic field rather than simply reacting after the fact.
**Editor:** Thank you, Professor Subrahmanian, for sharing your insights. It’s clear that your work at NSAIL is making a significant impact in the realm of national security through innovative AI technologies.
**Subrahmanian:** Thank you! It’s an honor to contribute to this field, and I look forward to sharing more advancements in the future.