Thomas Hallihan

C&IS Support Specialist, Political Science

Contact:

115 Prospect Street, Rosenkranz Hall, Room 211
1(203) 432-5727
thomas.hallihan@yale.edu

Assistance:

Department of Political Science Technical Support Pages

AI Articles

  • November 11, 2025 - “CometJacking attack tricks Comet browser into stealing emails”, Bleeping Computer, Bill Toulas, Linked From: Bruce Schneier.  Abstract: A new attack called ‘CometJacking’ exploits URL parameters to pass Perplexity’s Comet AI browser hidden instructions that allow access to sensitive data from connected services, like email and calendar.  In a realistic scenario, no credentials or user interaction are required, and a threat actor can carry out the attack simply by exposing a maliciously crafted URL to targeted users.
  • October 10, 2025 - “Autonomous AI Hacking and the Future of Cybersecurity”, CSO Online.  Heather Adkins, Gadi Evron, Linked From: Bruce Schneier.  Abstract: AI agents are now hacking computers. They’re getting better at all phases of cyberattacks, faster than most of us expected. They can chain together different aspects of a cyber operation, and hack autonomously, at computer speeds and scale. This is going to change everything.
  • October 7, 2025 - [Preprint] “Moloch’s Bargain: Emergent Misalignment When LLMs Compete for Audiences (PDF)”, arXiv.  James Zou; Batu El, Ph.D. Student in Computational and Mathematical Engineering, Stanford.  Abstract: Large language models (LLMs) are increasingly shaping how information is created and disseminated, from companies using them to craft persuasive advertisements, to election campaigns optimizing messaging to gain votes, to social media influencers boosting engagement. We show that optimizing LLMs for competitive success can inadvertently drive misalignment. Using simulated environments across these scenarios, we find that a 6.3% increase in sales is accompanied by a 14.0% rise in deceptive marketing; in elections, a 4.9% gain in vote share coincides with 22.3% more disinformation and 12.5% more populist rhetoric; and on social media, a 7.5% engagement boost comes with 188.6% more disinformation and a 16.3% increase in promotion of harmful behaviors. We call this phenomenon Moloch’s Bargain for AI—competitive success achieved at the cost of alignment. These misaligned behaviors emerge even when models are explicitly instructed to remain truthful and grounded, revealing the fragility of current alignment safeguards.
  • October 2, 2025 - “Prisonbreak – An AI-Enabled Influence Operation Aimed at Overthrowing the Iranian Regime”, The Citizen Lab.  Abstract: In the geopolitical and ideological competition between the Islamic Republic of Iran and its international and regional adversaries, control over and strategic manipulation of the information environment has always played a key role… Prior Citizen Lab research has uncovered Iranian disinformation efforts. In this investigation, we focus on the “other side” of the geopolitical competition: namely, an IO effort we assess as most likely undertaken by an entity of the Israeli government or a private subcontractor working closely with it.
  • May 21, 2025 - “Scam GPT: GenAI and the Automation of Fraud (PDF)”, Data & Society.  Lana Swartz, Alice E. Marwick, Kate Larson.  Abstract: Scams are not a new phenomenon. But generative AI is making scamming even easier, faster, and more accessible, fueling a surge in scams and misinformation at a global scale. This primer maps what we currently know about generative AI’s role in scams, the communities most at risk, and the broader economic and cultural shifts that are making people more willing to take risks, more vulnerable to deception, and more likely to either perpetuate scams or fall victim to them. 

