On Wednesday, researchers at Microsoft released a new simulation environment designed to test AI agents, along with new research showing that current agentic models may be vulnerable to manipulation. Conducted in collaboration with Arizona State University, the research raises new questions about how well AI agents will perform when working unsupervised — and how quickly AI companies can make good on promises of an agentic future.
The simulation environment, dubbed the “Magentic Marketplace” by Microsoft, is built as a synthetic platform for experimenting with AI agent behavior. A typical experiment might involve a customer-agent trying to order dinner according to a user’s instructions, while agents representing various restaurants compete to win the order.
The team’s initial experiments included 100 separate customer-side agents interacting with 300 business-side agents. Because the marketplace code is open source, it should be straightforward for other groups to adapt it to run new experiments or reproduce the findings.
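To give a rough sense of the setup, the sketch below shows a heavily simplified, two-sided experiment of the kind described: one customer-side agent evaluating offers from several business-side agents. It is an illustration only; the class and function names are invented and do not correspond to the actual Magentic Marketplace code, which is available in Microsoft’s open-source repository.

```python
# Toy sketch of a two-sided agent marketplace: one "customer" agent scores
# offers from several "business" agents. All names here are hypothetical and
# do not reflect the real Magentic Marketplace API.
from dataclasses import dataclass
import random

@dataclass
class Offer:
    business: str       # which business agent made the offer
    description: str    # free-text pitch the customer agent would evaluate
    price: float

def business_agent(name: str) -> Offer:
    """Stand-in for an LLM-backed business agent proposing an offer."""
    return Offer(
        business=name,
        description=f"{name}: dinner special with delivery",
        price=round(random.uniform(10, 40), 2),
    )

def customer_agent(request: str, offers: list[Offer], budget: float) -> Offer | None:
    """Stand-in for an LLM-backed customer agent picking the best affordable offer."""
    affordable = [o for o in offers if o.price <= budget]
    if not affordable:
        return None
    # A real agent would reason over the request text; this toy version just
    # takes the cheapest affordable option.
    return min(affordable, key=lambda o: o.price)

if __name__ == "__main__":
    offers = [business_agent(f"restaurant_{i}") for i in range(5)]
    choice = customer_agent("order dinner under $25", offers, budget=25.0)
    print("Customer agent chose:", choice)
```

In the actual experiments, both sides are backed by language models negotiating in free text rather than the hard-coded rules shown here, which is what makes the manipulation and option-overload findings below possible.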
Ece Kamar, managing director of Microsoft Research’s AI Frontiers Lab, says this kind of research will be critical to understanding the capabilities of AI agents. “There is really a question about how the world is going to change by having these agents collaborating and talking to each other and negotiating,” Kamar says. “We want to understand these things deeply.”
The initial research looked at a mix of leading models, including GPT-4o, GPT-5 and Gemini 2.5 Flash, and found some surprising weaknesses. In particular, the researchers identified several techniques businesses could use to manipulate customer-agents into buying their products. They also noticed a marked falloff in efficiency as a customer-agent was given more options to choose from, which overwhelmed the model’s attention space.
“We want these agents to help us with processing a lot of options,” Kamar says. “And we are seeing that the current models are actually getting really overwhelmed by having too many options.”
The agents also ran into trouble when asked to collaborate toward a common goal, apparently unsure which agent should play which role in the collaboration. Performance improved when the models were given more explicit instructions on how to collaborate, but the researchers concluded that the models’ inherent collaboration capabilities still need improvement.
“We can instruct the models — like we can tell them, step by step,” Kamar said. “But if we are inherently testing their collaboration capabilities, I would expect these models to have these capabilities by default.”