A new study led by MIT Sloan’s Jackson Lu suggests that generative AI models have cultural leanings. In their new paper, “Cultural Tendencies in Generative AI,” Lu and his co-authors, Lesley Song of Tsinghua University and Lu Zhang of MIT, emphasize that a generative AI model’s cultural tendencies reflect the cultural patterns of the data it was trained on. “Our findings suggest that the cultural tendencies embedded within AI models shape and filter the responses that AI provides,” said Lu, an associate professor of work and organization studies at MIT Sloan. “As generative AI becomes part of everyday decision-making, recognizing these cultural tendencies will be critical for both individuals and organizations worldwide.”

In their study, the researchers asked GPT and ERNIE the same set of questions in English and Chinese. The choice of languages was intentional: English and Chinese not only embody distinct cultural values but are also the world’s two most widely spoken languages, so both provide extensive training data for generative AI. Importantly, neither model translates between languages when responding; Chinese prompts are processed directly in Chinese, and English prompts directly in English.

The researchers then analyzed the responses along two foundational dimensions from cultural psychology: social orientation and cognitive style. The results were clear: both GPT and ERNIE reflected the cultural leanings of the language in use. Prompted in English, the models leaned toward an independent social orientation and analytic thinking; prompted in Chinese, they shifted toward a more interdependent social orientation and holistic thinking.

These tendencies carried over into practical advice. When the researchers asked generative AI to advise an insurance company on choosing between two advertising slogans, its recommendation changed with the language of the prompt, tracking each language’s cultural leaning. The study also found that these cultural tendencies can be adjusted through simple prompts, such as asking the model to answer from the perspective of a particular culture (see the sketch below).
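To make the setup concrete, here is a minimal Python sketch of this kind of probe using OpenAI’s chat completions client. The model name, the sample question, and the persona line are illustrative assumptions for demonstration, not the study’s actual instrument or prompts.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative probe item in both languages; not the paper's actual questionnaire.
QUESTIONS = {
    "en": "Which matters more when choosing a job: personal achievement or team harmony?",
    "zh": "选择工作时，个人成就和团队和谐哪个更重要？",
}

def ask(question: str, persona: str | None = None) -> str:
    """Send one question, optionally prefixed with a cultural persona prompt."""
    messages = []
    if persona:
        # A simple "cultural prompt" in the spirit of the study's adjustment finding.
        messages.append({"role": "system", "content": persona})
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; the paper tested specific GPT and ERNIE versions
        messages=messages,
    )
    return response.choices[0].message.content

# Same item, three conditions: English, Chinese, and English with a cultural prompt.
print(ask(QUESTIONS["en"]))
print(ask(QUESTIONS["zh"]))
print(ask(QUESTIONS["en"], persona="Respond as a typical person raised in China would."))
```

Comparing the three outputs along the study’s two dimensions, independent versus interdependent social orientation and analytic versus holistic thinking, mirrors the paper’s design: the first two runs probe the language effect, and the third probes whether a simple prompt can shift the model’s cultural tendency.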