
DigiBanker

Bringing you cutting-edge new technologies and disruptive financial innovations.


Generative AI isn’t culturally neutral, MIT research finds, but its cultural tendencies can be adjusted through simple prompts

September 24, 2025 //  by Finnovate

A new study led by MIT Sloan’s Jackson Lu suggests that generative AI models have cultural leanings. In their paper, “Cultural Tendencies in Generative AI,” Lu and his co-authors — Lesley Song of Tsinghua University and Lu Zhang of MIT — argue that these tendencies reflect the cultural patterns of the data the models were trained on. “Our findings suggest that the cultural tendencies embedded within AI models shape and filter the responses that AI provides,” said Lu, an associate professor of work and organization studies at MIT Sloan. “As generative AI becomes part of everyday decision-making, recognizing these cultural tendencies will be critical for both individuals and organizations worldwide.”

In the study, the researchers asked GPT and ERNIE the same set of questions in English and Chinese. The choice of languages was intentional: English and Chinese not only embody distinct cultural values but are also the world’s two most widely spoken languages, giving generative AI models extensive training data in each. Importantly, neither model translates between languages when responding — Chinese prompts are processed directly in Chinese, and English prompts directly in English.

The researchers then analyzed the responses along two foundational dimensions from cultural psychology: social orientation and cognitive style. The results were clear: both GPT and ERNIE reflected the cultural leanings of the language used. In English, the models leaned toward an independent social orientation and analytic thinking; in Chinese, they shifted toward a more interdependent social orientation and holistic thinking. When the researchers asked generative AI to advise an insurance company on choosing between two advertising slogans, the recommendations differed between the Chinese and English prompts.

The study also found that these cultural tendencies can be adjusted through simple prompts.


Category: Additional Reading


Copyright © 2025 Finnovate Research · All Rights Reserved · Privacy Policy
Finnovate Research · Knyvett House · Watermans Business Park · The Causeway Staines · TW18 3BA · United Kingdom · About · Contact Us · Tel: +44-20-3070-0188
