
News

AI red lines: who sets the limits in national security?

Published: 9 March 2026

In a recent piece in The Conversation, Professor Emmanuelle Vaast examines the growing tensions surrounding the use of artificial intelligence in warfare and surveillance. The discussion was prompted by a decision from AI company Anthropic, whose CEO refused to grant the U.S. military unrestricted access to its AI systems. The company established two "red lines": prohibiting the use of its technology for mass surveillance of citizens, and preventing the deployment of fully autonomous weapons without human oversight.

Professor Vaast explains that advanced AI models are increasingly capable of supporting military operations by analyzing intelligence, prioritizing targets, and recommending strategic actions, even when they do not directly control weapons. At the same time, AI-driven mass surveillance systems pose serious risks to privacy and civil liberties by combining large datasets, facial recognition, and predictive algorithms to monitor populations. The article therefore raises a broader governance question: who should set limits on the use of AI in security contexts? Professor Vaast argues that relying on broad contractual language such as "all lawful purposes" is insufficient. As countries like Canada expand their AI capabilities and collaborate with allies such as the United States, stronger governance mechanisms will be necessary to ensure that ethical safeguards extend beyond the decisions of individual companies or governments.

Feedback

For more information, or if you would like to report an error, please contact us at web.desautels [at] mcgill.ca (subject: Website News Comments).
