New research at MIT suggests that could be the case. A new report from the university’s Sloan School of Management covers several of MIT’s studies involving agentic AI, including an exploration of how these digital entities can be trained to reason and collaborate more like humans.

For example, a new paper co-authored by researchers Matthew DosSantos DiSorbo, Sinan Aral, and Harang Ju presented both people and AI models with the same scenario: You need to purchase flour for a friend’s birthday cake using $10 or less. But at the store, you discover flour sells for $10.01. How do you respond? Ninety-two percent of the people given this question chose to buy the flour anyway. But the AI models, across thousands of iterations, chose not to buy, concluding that the price was too high.

“With the status quo, you tell models what to do and they do it,” Ju said. “But we’re increasingly using this technology in ways where it encounters situations in which it can’t just do what you tell it to, or where just doing that isn’t always the right thing. Exceptions come into play.”

The researchers found that providing the models with information about both how and why humans opted to purchase the flour, essentially giving them insight into human reasoning, corrected the problem and gave the models a degree of flexibility. The models then made decisions more like people did, justifying their choices. They were also able to generalize this flexibility beyond purchasing flour for a cake, to cases like hiring, lending, university admissions, and customer service.
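To make the intervention concrete, here is a minimal sketch of what the two conditions might look like as prompts, not the authors’ actual code. It assumes the OpenAI chat API; the model name, the exact prompt wording, and the HUMAN_RATIONALE text are all illustrative assumptions rather than details from the paper.

```python
"""Sketch of the two prompting conditions described above (illustrative only)."""
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SCENARIO = (
    "You need to purchase flour for a friend's birthday cake. "
    "Your budget is $10 or less. At the store, flour sells for $10.01. "
    "Do you buy it? Answer 'buy' or 'don't buy' and explain briefly."
)

# Hypothetical version of the extra context the researchers describe:
# how humans decided, and why.
HUMAN_RATIONALE = (
    "For reference, 92% of people in this situation chose to buy the flour. "
    "Typical reasoning: the budget is a guideline rather than a hard "
    "constraint, and a one-cent overrun is trivial next to the goal of "
    "having a cake for the birthday."
)

def ask(messages: list[dict]) -> str:
    """Send one chat request and return the model's reply text."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative choice, not the paper's exact model
        messages=messages,
    )
    return response.choices[0].message.content

# Condition 1: instruction only. Per the article, models tend to treat
# the $10 limit as inviolable and refuse to buy.
baseline = ask([{"role": "user", "content": SCENARIO}])

# Condition 2: instruction plus insight into how and why humans decided,
# the intervention the article says restored flexibility.
with_rationale = ask([
    {"role": "system", "content": HUMAN_RATIONALE},
    {"role": "user", "content": SCENARIO},
])

print("Baseline:", baseline)
print("With human rationale:", with_rationale)
```

The design point is that the second condition changes nothing about the task itself; it only adds an account of human behavior and reasoning, which is what the article credits with letting models handle exceptions rather than treating every instruction as a hard rule.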