It doesn’t seem so long ago that when someone mentioned artificial intelligence, you knew that a reference to The Terminator was on its way.
AI was only an occasional news story and few of us were all that concerned. Fine, then, to indulge the fantasy that a race of muscular, chiselled Arnie-bots might one day - one day - arise and turn their guns on us…
Now, AI has truly arrived: from ChatGPT to online services like Amazon working it into their websites - though proud boasts of ‘powered by artificial intelligence’ are sometimes made for what seem like pretty ordinary search functions (more PR than AI, perhaps).
Arnold Schwarzenegger in The Terminator (1984) - the first 18-rated film I ever saw, though that’s by the by.
Our image of AI has changed, too: from good old Arnie to rather scrawnier, casually-dressed ‘tech bros’ vying to outdo one another. They model a curious kind of ultra-modern masculinity, combining cutting-edge ideas and engineering with ancient instincts. Mark Zuckerberg and Elon Musk have famously agreed to settle their differences not with high-concept conversation - but in a cage fight.
For all the joy that such moments bring, the implications of allowing a narrow demographic to wield great power in developing artificial intelligence are very real. We talked about this on the BBC’s Free Thinking programme a few months ago: ‘AI and Asian Stereotypes’. Well worth a listen!
Fortunately, people right around the world are working on AI. And an interesting theme that’s beginning to emerge is the role of culture in shaping how AI is imagined and developed. Big questions come into play. What is intelligence? Who are we, as human beings?
Until relatively recently, artificial intelligence research in the west seemed to operate with a definition of ‘intelligence’ as a capacity for narrow, abstract reasoning. Artificial General Intelligence (AGI) was duly imagined in fiction as a disembodied, emotionless, super-rational entity. Part of the threat it posed came from that lack of body or feelings: it could get anywhere, and do anything. Humans had somehow to build a digital and regulatory cage before it popped into existence.
Zuckerberg and Musk duking it out - as imagined in the New York Post
In Japan, the word for artificial intelligence is jinkō chinō (人工知能). Jinkō means ‘artificial’, in quite a straightforward way. Chinō, on the other hand, carries the sense not just of narrow reasoning but of a broader capacity that incorporates wisdom (chie) and practical talent or ability (sainō). It suggests a combination of head and heart: close, perhaps, to the definition of the ancient Greek word nous offered by Rowan Williams in conversation a while back.
I find it fascinating that some of the most notable AI innovations in Japan so far have involved embodied and emotionally-responsive technology. You can see some of the results on display at Tokyo’s Miraikan, if you have a chance to visit. They include AI systems, placed into robotic bodies, which are dedicated to reading human facial expressions and looking for signs of stress in voice tone and word choice. The results are far from perfect - I’ll come to that in a moment. But the goal is for AI-powered robots to become able to read and respond accurately to human emotions.
What’s more, there’s an acceptance amongst prominent Japanese developers – even a welcoming of the prospect – that these AI systems will not so much serve human beings as strike up a symbiotic relationship with us. Just as the AI learns from us and our emotions, so we, by interacting with it, deepen our ability to care for others.
Much of this is industry and government PR, of course: robots have been presented to the Japanese public for many decades, now, as a means of guaranteeing the country’s future. And there is always a danger of leaning too heavily on cultural or religious explanations for why Japan might do AI differently from the West.
Still, I wouldn’t want to discount explanations of the cultural kind altogether. Japan has a tradition of automata stretching back to mechanized puppets - karakuri ningyō - in the Tokugawa era (17th - 19th centuries). And it’s well-established that futuristic postwar manga like Astro Boy helped to inspire generations of robotics engineers in Japan - contributing also to a sense amongst the general public that robots are compassionate, unthreatening helpers in life.
Left: a karakuri ningyō. Right: ‘Astro Boy’, created in the 1950s by Osamu Tezuka
There’s a philosophical dimension to this, too, which I think mirrors the reception of Darwinian evolution in Japan, back in the late nineteenth century.
Some in the West struggled with evolution early on, because it seemed to threaten humanity’s pride of place in the cosmos - made in the image and likeness of God. There was markedly less struggle in Japan.
In Shintō thought, it is possible for remarkable human beings to become gods (kami) after death. Evolution created problems for some traditionalist thinkers, as a result: it seemed blasphemous and disgusting to claim that the kami might have maggots and amoebae for ancestors.
But most Japanese had never imagined human beings at the centre of the cosmos, nor did they make the sort of strong distinctions within nature to which westerners were accustomed: humans versus animals; life versus inanimate matter. There was a sense instead – found, now, most colourfully in Hayao Miyazaki films – of life flowing through everything.
At the risk of extrapolating wildly from a single encounter, I remember having an argument many years ago with my wife, about vegetarianism. From her point of view, growing up in Okayama, there was life in a carrot and life in a cow. You might distinguish between the two on the basis of sentience – including the ability to feel anxiety and pain. But there was no deeper distinction between the two - a person would need to find other reasons to be vegetarian.
This inclusive attitude towards life and presence comes through in Qoobo the cat. Not quite a cat, in fact, but rather a fluffy grey cushion with a tail. As Hirofumi Katsuno and Daniel White point out in a new book, Imagining AI: How the World Sees Intelligent Machines (2023), the value of this not-quite-cat lies in two things: not trying to be a real cat (which would invite disappointment when it failed to match up); and managing to exude a strong sense of physical, living presence.
Qoobo (Husky grey) - yours on ebay for about £150
The imagining of AI in terms other than super-brainy disembodied entities comes across, too, in the rise of semi-humanoid robots in Japan. ‘Pepper’ was launched almost a decade ago by SoftBank Robotics - a major player in the industry. There was much discussion when it was dressed up in Buddhist robes and programmed to officiate at funerals.
One of the reasons this ‘worked’, for those who thought that it did, was that in general, Asian religions place less emphasis on what a person believes – i.e. whether you or I are capable of formulating and assenting to key religious claims in our heads – than what a person does: rituals, conduct, how we move through life and interact with others.
Pepper might fall foul of a Buddhist inquisition. But it had presence, it ‘possessed the Buddha-nature’ (from a western philosophical point of view, you might say that it participated in Being), and unless it suffered some kind of technical malfunction it would be able to perform correct - and hence meaningful - ritual.
Pepper in Buddhist robes, chanting a sutra.
In China, too, the sense runs deep that human beings have an important role in the cosmos but we are not the unique, all-or-nothing forms of life found in the Judaeo-Christian tradition. Chinese Communism may more often appear in our headlines than Daoism, Buddhism or Confucianism. But these older philosophies still have currency - and all, in their different ways, picture human beings as participating in a larger social and metaphysical flow.
Chinese and Japanese thinking about AI shares something else in common, too: strategising - and fretting - about the future. Just recently, the Japanese government announced plans to expand Japan’s development of generative AI by twenty to thirty times by March 2028 (‘generative AI’ meaning systems, like ChatGPT, which produce content - text, images, etc). China, meanwhile, claims to be working hard on ‘cognitive AI’: systems that replicate human cognition.
Discussions of AI in the West often turn on what AI might mean for the labour market – are robots coming for our jobs? In Japan, too, there are those who worry in a similar way: AI, they say, threatens to accelerate the country’s fall away from what was once, back in the 1980s, a job-for-life economy - at least for men.
Elsewhere, companies blame shortages in the workforce for their turn to AI. Japan’s Mainichi newspaper has reported on the rise of ‘smart agriculture’: AI-equipped robots being used for tasks like checking to see whether it is the right time to harvest cucumbers.
An AI-powered robot inspects cucumbers in Saitama prefecture.
But one of the greatest challenges facing China and Japan is population ageing and shrinkage. It is estimated that by 2050, Japan’s population of over-65s will be the same size as its working-age population. This has led to an arms-race of sorts in developing high-quality ‘care-bot’ technology: robots capable both of being good company for the elderly, and of looking after them physically.
In theory, care-bots hold out the hope of dignified care for the elderly against the backdrop of an ever-tighter workforce - a problem exacerbated in Japan by high levels of opposition to immigration. And yet, a new book by James Wright - Robots Won’t Save Japan (2023) - suggests that the techno-optimism of the 2010s was a little misplaced. In some cases, care-bots actually create more work for human carers, who must wheel them between rooms in facilities that are not set up for care-bot tech.
It may be, Wright suggests, that for all the salvific symbolism that has surrounded robotics in twentieth-century Japan (and now accompanies AI-powered versions), the country’s real future may be a less happy one. Precariously-employed migrant workers with little opportunity to learn Japanese or other higher-level skills may find themselves looking after both elderly Japanese and care-bots. The latter may not exactly steal their jobs, so much as condemn them to working in lower-skilled ones, for poor pay, since higher-level skills like understanding and speaking Japanese can be handled by the AI inside the care-bots.
Here, as elsewhere, artificial intelligence is forcing fundamental questions. What would be the ideal size of population for a country like Japan, bearing in mind the impact of human beings on the climate? What sort of society does it want to be? This is no abstract, unhurried concern. It is a matter of here-and-now problems, highly emotional and deeply moral: how people in Japan, China and, in time, many parts of the West, ought to care for the elderly; how human beings and robots ought one day to work together.
Who knows whether cultural diversity in the development of AI will, in the end, play to humanity’s general advantage. One thing, however, seems certain. If the more doom-laden predictions about AI are accurate, and it will soon outpace us, we may be coming to the end of a period in history when these questions are ours alone to answer.
—
Thank you for reading! You now have four options…
1 - Walk away, humming, with your hands in your pockets.
2 - Put a smile on a man’s face by clicking the ‘heart’ button at the top of the page (or right below here, if you’re reading on the Substack app).
3 - Subscribe to IlluminAsia, to get it in your inbox every week.
4 - Really push the boat out and share this post with a friend:
Or, indeed, any combination of the four!
Images
The Terminator: IMP Awards (fair use).
Zuckerberg & Musk: New York Post (fair use).
Karakuri ningyō: Tokyo Weekender (fair use).
Astro Boy: via Miraheze (fair use).
Qoobo: as seen on ebay (fair use).
Pepper: Venture Beat (fair use).
AI-powered robot in Saitama: Mainichi (fair use).
Yes, indeed! AI is forcing fundamental questions. This is why ‘AI’ (the air quotes on purpose) is a deeply fascinating topic to me. Perhaps, at its core, it is not about technology but about how humanity perceives itself. ‘To be or not to be’; ‘I think, therefore I am’. Essentially a question of ‘how closely can this system mimic myself?’ This human-likeness is, I believe, a stand-in for intelligence. Or, in other words, a system is perceived as intelligent if it mimics a human as closely as possible. So I have the feeling that, at the moment, vanity plays a big role in how people perceive AI. Because we are, of course, the measure of all things. Didn’t God create us in his image? As far as I understand it, AI is still an elaborate probability equation. We can do it in regular databases, and it’s apparently not really a new concept. It tries to represent reality - or, in practice, a large set of text - in numbers. (Almost like the obsessed mathematician’s pursuit of an equation for life; to me, beautifully rendered in Aronofsky’s film Pi.) A calculator (on our wristwatches, concealed below our school desks) far exceeded our capabilities. So do modern-day computers. But since the day we could interact with them as we would with a human being, we have perceived them as AI. Is it intelligent? No. But it ‘is’ what we make of it.
So I believe your thought that ‘the goal is for AI-powered robots to become able to accurately read and respond to human emotions’ is spot on, and that the ‘philosophical dimension’ and the emotional aspect of AI are not to be underestimated. ‘AI will not so much serve human beings as strike up a symbiotic relationship with us’, as you say. To me, this ‘AI’ comes down to UX - another acronym that gets thrown around: ‘user experience’. The question is how naturally we can interact with a computer (versus a human being). The interface behaves ‘naturally’ if it mimics human interaction. In theory, perfect. (Why all the abstraction via mice, keyboards and icons? A voice command like ‘Computer: Earl Grey, hot’ is much more… practical.)
SoftBank’s idea of putting the Pepper robot in stores to respond to customer queries (as opposed to a boring ‘vending machine’-type interface) was really innovative - and ‘cute’ (not meant as derogatory; in fact, the way to go). Also, the idea you point out of having robot care-givers for the elderly might have come too early, but it is, in a way, the direction I would aim for. Dystopian, yes, since I would prefer a flesh-and-blood person (I did two years of civil service in a public care home for the elderly, which I would not want to have missed), but indeed, ‘the goal is for AI-powered robots to become able to accurately read and respond to human emotions’ is what we should strive for. And this emotional and social angle, which we have seen in Japanese IT companies for a while, should be applauded. (I might have outed myself as a techno-optimist.) However, the days when Rick Deckard (Blade Runner) has to ‘out’ people as androids (while not being sure whether he himself is the real biological deal), and when we forge friendships with androids like Data (Star Trek) and fight for their ‘right to live’ in court, are still far off, I believe. But age-old problems (‘age-old’ as in the dawn of the internet), like intellectual property and attribution of sources, are probably much more pressing. Unfortunately, Deckard’s reality, in which he had to question who is real (fact versus fake news), has become more ‘present’ than ‘future’ than I would like. Paired with potential threats to jobs, wages and livelihoods...
… I am somewhat embarrassed by the rambling length of my comment… please take it as an ‘I enjoyed reading your post’ and a ‘would love to read more’, and think about ‘what sort of society do we want to be’ :)