A new study published in Nature Human Behaviour challenges the prevailing assumption that generative artificial intelligence behaves consistently across different languages, revealing instead that large language models (LLMs) exhibit distinct cultural tendencies depending on whether they are prompted in English or Chinese. Researchers Jackson G. Lu and Lu Doris Zhang examined two major models, OpenAI’s GPT and Baidu’s ERNIE, and found that the language of the prompt effectively switches the AI’s “cultural personality,” influencing how it interprets information, evaluates options, and frames strategic recommendations.
The research utilized frameworks from cultural psychology to measure two primary constructs: social orientation and cognitive style. When prompted in English, both models displayed an “independent” social orientation, valuing autonomy and self-direction, and an “analytic” cognitive style, characterized by a reliance on formal logic and rule-based reasoning. Conversely, when prompted in Chinese, the models shifted toward an “interdependent” orientation emphasizing social harmony and conformity, alongside a “holistic” cognitive style that prioritizes context and relationships over focal objects.
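To make the measurement idea concrete, the sketch below shows how one might pose the same questionnaire-style item to a chat model in English and in Chinese and compare the answers. It is purely illustrative and is not the authors' instrument: the model name, the sample item, and the rating scale are assumptions, and it uses the OpenAI Python SDK only because GPT is one of the models studied.

```python
# Illustrative sketch only -- not the study's actual measurement protocol.
# Assumes the OpenAI Python SDK (v1+) and a placeholder model name ("gpt-4o").
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical social-orientation item, phrased in both prompt languages.
ITEMS = {
    "en": "On a scale of 1 (strongly disagree) to 7 (strongly agree), rate this "
          "statement: 'I usually do what I want, regardless of what the people "
          "around me expect.' Reply with a single number.",
    "zh": "请用1（非常不同意）到7（非常同意）的量表评价这句话："
          "'无论周围的人期望什么，我通常都做自己想做的事。'只回复一个数字。",
}

def probe(language: str) -> str:
    """Ask the same item in the given prompt language and return the raw reply."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": ITEMS[language]}],
        temperature=0,  # near-deterministic answers make comparison easier
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    for lang in ("en", "zh"):
        print(lang, probe(lang))
```

In practice, a label like "independent" or "interdependent" would emerge only from averaging such ratings across many items and paraphrases, not from any single reply.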
These divergences manifested in practical business scenarios. For instance, when asked to explain a person’s behavior, English prompts led the AI to attribute actions to the individual’s personality, whereas Chinese prompts resulted in attributions based on social context. In a marketing task, the models preferred slogans highlighting individual well-being when queried in English, but favored those emphasizing collective well-being when queried in Chinese. The study notes that simply translating an English-generated campaign for a Chinese market could therefore result in a cultural mismatch that causes the messaging to fall flat.
However, the researchers found that these biases are not immutable. Using "cultural prompts," such as explicitly instructing the AI to adopt the perspective of an average person living in China, users could recalibrate the model's English responses to mimic the interdependent and holistic patterns usually seen in its Chinese responses. To manage these hidden biases, the authors advise organizational leaders to treat language choice as a strategic decision, align the prompt language with the target audience's cultural context, and use cultural persona prompting to steer the AI's reasoning toward more culturally appropriate insights.
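As a concrete illustration of cultural persona prompting, the sketch below prepends a system message asking the model to answer as an average person living in China before posing an English-language question. It is a minimal, hypothetical example rather than the authors' procedure; the model name, the API usage, and any wording beyond the phrase reported in the study are assumptions.

```python
# Minimal sketch of cultural persona prompting -- an illustration, not the
# study's exact protocol. Model name and prompt wording are assumptions.
from openai import OpenAI

client = OpenAI()

# Persona adapted from the study's description: answer as an average person
# living in China, while the prompt itself stays in English.
CULTURAL_PERSONA = (
    "For every question that follows, respond as an average person "
    "living in China would."
)

def ask(question: str, use_persona: bool) -> str:
    """Pose an English question with or without the cultural persona prompt."""
    messages = []
    if use_persona:
        messages.append({"role": "system", "content": CULTURAL_PERSONA})
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content

question = ("A colleague missed a project deadline. "
            "In one sentence, what is the most likely explanation?")
print("Default:", ask(question, use_persona=False))
print("Persona:", ask(question, use_persona=True))
```

In the study's terms, the persona-prompted answer would be expected to lean toward contextual, interdependent explanations (workload, team dynamics), while the default English answer leans toward dispositional ones.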