This month, AI bots slipped into Santa's grotto. For one thing, AI-powered gifts are on the rise, as I can attest myself, having just acquired an impressive AI-based dictation device.
Meanwhile, retailers such as Walmart are offering AI tools to give stressed-out shoppers holiday help. Think of these, if you will, as the digital equivalent of a personal elf, providing shortcuts to shopping and gifts. Judging by recent reviews, it seems to work well.
But here's the irony: even as AI pervades our lives (and Christmas stockings), hostility remains sky-high. Earlier this month, for example, a British government survey found that four in ten people expect AI to bring benefits. However, three in ten expect significant harm, from data security breaches, the spread of misinformation and job displacement.
Perhaps this is not surprising. The risks are real and well publicized. However, as we move into 2025, it is worth considering three often overlooked points about the current anthropology of AI, which may help frame this paradox in a more positive way.
First, we need to rethink what the "A" in "artificial intelligence" stands for today. Yes, machine-learning systems are "artificial". But bots do not always, or even usually, replace our human brains as a substitute for flesh-and-blood cognition. Instead, they typically enable us to work faster and move through tasks more effectively. Shopping is just one example of this.
So perhaps we should recast AI as "augmented" or "accelerated" intelligence, or, to use the current buzzword, "agentic" intelligence. Nvidia's latest blog calls this the "next frontier" of artificial intelligence, referring to bots that can act as autonomous agents, performing tasks for humans at their command. It will be a major theme in 2025. Or, as Google declared when it recently unveiled its latest Gemini AI model: the agentic era has arrived.
Second, we have to think beyond the cultural framework of Silicon Valley. So far, the discussion about AI on the global stage has been "dominated by English-speaking actors", as the academics Stephen Cave and Kanta Dihal note in the introduction to their book Imagining AI. This reflects American technological dominance.
However, other cultures view AI somewhat differently. Attitudes in developing countries, for example, tend to be much more positive than in developed ones, as James Manyika, co-chair of a UN advisory body on artificial intelligence and a senior Google official, recently told Chatham House.
Countries such as Japan are different, too. Japanese people have long expressed notably more positive feelings toward robots than their English-speaking counterparts, and this is now reflected in attitudes toward AI systems as well.
Why is this? One factor is Japan's labor shortage (and the fact that many Japanese are wary of immigrants filling that gap, and thus find it easier to accept robots). Another is popular culture. In the second half of the twentieth century, while Hollywood films such as The Terminator and 2001: A Space Odyssey were spreading fear of intelligent machines among English-speaking audiences, Japanese audiences were captivated by the Astro Boy saga, which portrayed robots in a benign light.
Its creator, Osamu Tezuka, attributed this to the influence of the Shinto religion, which, unlike the Judeo-Christian tradition, does not draw strict boundaries between living and inanimate things. "The Japanese make no distinction between man, the superior creature, and the world around him," he once observed. "We accept robots easily, along with the wide world around us, the insects, the rocks; it's all one."
This is reflected in how companies such as Sony and SoftBank design AI products today, as one of the essays in Imagining AI notes: they are trying to create "robots with a heart", in a way that American consumers might find scary.
Third, this cultural diversity shows that our responses to AI need not be set in stone; they can evolve as technological changes and cross-cultural influences emerge. Consider facial-recognition technologies. In 2017, Ken Anderson, an anthropologist working at Intel, and his colleagues published a study examining the attitudes of Chinese and American consumers toward facial-recognition tools. They found that while the former accepted the technology in everyday tasks, such as banking, the latter did not.
This distinction appeared to reflect American concerns about privacy. But in the same year the study was published, Apple introduced facial-recognition tools on the iPhone, and American consumers quickly accepted them. Attitudes changed. The point, then, is that "cultures" are not like Tupperware boxes, closed and static. They are like slow-moving rivers with muddy banks, into which new streams flow.
So, whatever 2025 holds, the one thing that can be predicted is that our attitudes toward AI will keep shifting subtly as the technology becomes ever more normal. That may worry some. But it may also help us reframe the debate about technology more constructively, focusing on ensuring that humans control their digital "agents", not the other way around. Today's investors may be rushing toward AI, but they should ask which "A" they want in that label.