While many colleges continue to debate whether artificial intelligence belongs in the classroom, adversaries are already using AI to manipulate perception, automate deception, poison data, and target belief systems at scale. The gap between academic caution and real-world threat environments is no longer theoretical. It is operational.
If artificial intelligence is shaping modern conflict, why is intelligence education still lagging behind it?
The simple answer is that it isn’t, at least not everywhere.
At Hilbert College, we chose to stop treating AI as a novelty or a risk-management footnote and start treating it as what it has become: a core element of modern intelligence tradecraft.
Hilbert’s Intelligence and Data Analysis program has built an integrated, multi-course pipeline that treats AI not as a tool students occasionally touch, but as a contested domain they must understand, exploit, and defend against. This is not a single elective or a surface-level survey. It is a deliberate curriculum designed around how intelligence works today.
Students begin with Generative AI for Intelligence Analysis and Strategic Forecasting, where they examine how large language models shape analytic judgment, forecasting accuracy, and decision-making. They learn to compare human analysis against AI outputs, identify bias and hallucinations, and apply structured analytic techniques to hybrid human–AI workflows. The focus is not speed or automation. It is analytic integrity under pressure.
From there, students move into AI-Driven Intelligence Collection, applying AI across HUMINT, SIGINT, OSINT, GEOINT, MASINT, and FININT. This course forces students to confront where AI strengthens collection—and where it introduces risk: false positives, adversarial manipulation, and overreliance on automated pipelines. Collection planning is treated as a thinking problem, not a software demonstration.
Then the program turns deliberately uncomfortable.
Cognitive Warfare: Applied AI and Narrative Power trains students to understand how AI is used to shape belief, emotion, identity, and perception through narrative engineering, synthetic personas, and algorithmic amplification. This course does not teach propaganda as history; it teaches influence as an active battlefield where attention is the terrain and psychology is the target. Students learn how to build, apply, and detect these systems, and where ethical boundaries must be enforced.
Finally, Adversarial AI and Algorithmic Deception completes the arc. Students learn how AI systems themselves are attacked: prompt injection, data poisoning, synthetic media, and model exploitation. They red-team AI-enabled intelligence workflows and design defenses that preserve analytic trust when the tools themselves become targets. This is defensive intelligence education for an era in which machines are no longer neutral.
Taken together, this curriculum does something rare in higher education. It treats AI as a contested operational environment, not just a productivity enhancer. It assumes students will face deception, manipulation, and adversarial pressure—and prepares them accordingly.
Many institutions are still asking whether AI should be allowed in intelligence education.
Hilbert is asking a different question:
If our adversaries are already weaponizing AI against perception, analysis, and trust, why would we educate the next generation of intelligence professionals as if that reality does not exist?
Contact Us
Jonathan Sullivan
Assistant Professor