The future of AI and Literature—Interview with Miyake Youichiro: “AI research rebuilds the world and intelligence” - Discuss Japan
Discussions, Culture, No.70  Jul. 14, 2022


“It is normal for everything to begin with chaos in the East.” — Miyake Youichiro
Photo: AlexBurakov / PIXTA

 

What does it mean for Artificial Intelligence (AI) to be beyond human power? We looked into the deep relationship between life and intelligence from the front lines of research.

Interview by Yamamoto Takamitsu and Yoshikawa Hiromitsu

Miyake Youichiro
Game AI developer

Yamamoto Takamitsu
Writer, game designer

Yoshikawa Hiromitsu
Writer, editor

“Intelligence” Has Not Been Defined

Yamamoto Takamitsu: For this special feature of the monthly literary magazine Bungakukai, Yoshikawa and I are speaking with experts about “The future of AI and literature.” The current AI boom, called the Third Wave, has seen the term “AI” used widely, but it has become difficult to grasp the substance behind this buzzword.

 

Yoshikawa Hiromitsu: There is also a fear of a “singularity” occurring, where AI gains an intelligence that far surpasses humanity. A sense of anxiety is understandable, as there are a variety of fields where AI can show its power, such as in the board games of go and shogi (Japanese chess), machine translation, automatic text generation, and more. I’d like to take this opportunity to search for what AI really is from a fundamental place.

 

Yamamoto: Miyake Youichiro is the first interviewee in the series of interviews about “The future of AI and literature” (guests are introduced only in their first installment), and he is active on the front lines as a game AI developer and an AI researcher. Despite the ebbs and flows of trends, digital games are an area where AI has continued to be developed from a comparatively early period. That is because in digital games, computers need to be good playmates for players. Games evolve differently depending on the player, and AI is what autonomously runs a game in response to the situation. Miyake creates the characters and game worlds that appear in games.

 

Miyake Youichiro: Thank you for the introduction.

 

Yamamoto: What I want to first ask you, Miyake, is what is the “intelligence” in “artificial intelligence”? Sorry for jumping right to the point.

 

Miyake: You may be surprised to know that the definition of “intelligence” actually differs between biology, philosophy, psychology, and other fields, and there is no settled definition. Therefore, “artificial intelligence” is also not yet defined. I will answer your question with that in mind. There are both broad and narrow definitions of “intelligence.” Let’s start with the broad one. It defines “intelligence” as having the intellectual function needed in a given situation: being able to differentiate between apples and oranges, to open a door when a person arrives, to solve a puzzle, to use words, and so on.

On the other hand, the narrow definition according to AI experts defines “intelligence” as meeting three conditions: being aware of the world through senses, making decisions based on thought, and having an impact on the world.
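The narrow definition above can be sketched as a minimal agent loop. This is an illustrative sketch only, with invented class and attribute names, not anything from Miyake's own systems:

```python
# A minimal sketch of the narrow definition of intelligence:
# an agent that (1) senses the world, (2) decides based on what it
# sensed, and (3) acts back on the world. All names are illustrative.

class World:
    def __init__(self):
        self.door_open = False
        self.person_nearby = False

class Agent:
    def sense(self, world):          # (1) be aware of the world through senses
        return {"person_nearby": world.person_nearby}

    def decide(self, percept):       # (2) make a decision based on thought
        return "open_door" if percept["person_nearby"] else "wait"

    def act(self, decision, world):  # (3) have an impact on the world
        if decision == "open_door":
            world.door_open = True

world = World()
agent = Agent()
world.person_nearby = True
agent.act(agent.decide(agent.sense(world)), world)
print(world.door_open)  # the agent has changed the world
```

Anything that closes this loop, however simple, meets the narrow definition; the door-opening example from the broad definition fits naturally into it.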

If there is confusion about the definition of AI, it might be due to the fact that there is no distinction between the broad and narrow definitions of intelligence.

“Strong AI” and “Weak AI”

Yamamoto: When considering “intelligence,” there is a method for determining whether something is an intelligence or not by comparing it to something that is not an intelligence. Being able to make decisions based on thought is one of the conditions for the narrow definition of “intelligence,” but the human mind has emotion, will, and a variety of other mechanisms in addition to thought. Does the narrow definition categorize only the part associated with thought as intelligence? Or are emotion, will, and other mechanisms included in intelligence?

 

Miyake: A precise discussion is needed here. Under the narrow definition, “intelligence” has depth. For example, when a single sensor reacts, an arm moves via a circuit. This is intelligence in its shallowest sense. The richer the sensing becomes, the richer the thought. If the shallowest intelligence is one layer, then experts can stack 10 or 20 layers. In humans, these layers take the form of internal structures such as the body, the nervous system, and the brain.

As we stack these layers, do we eventually arrive at something like a mind or soul? The concepts of “strong AI” and “weak AI” developed to answer this question. The public interprets “strong AI” as having deep intelligence and “weak AI” as having shallow intelligence, but this is a misunderstanding. “Strong AI” is the philosophical position that AI can include genuine mental activity, while “weak AI” is the position that AI is nothing more than an imitation by a machine. “Strong” and “weak” do not express AI’s functionality, but human philosophical positions. This debate has not yet been settled.

 

Yoshikawa: Hearing about the broad and narrow definitions reminded me that there was a practice robot at the table tennis practice hall I went to the other day. It was a robot that had no functions other than to oscillate and send the ball right or left. An elementary school boy saw these movements and asked his father if it was AI, to which his father replied, “Um, that may not be AI. If it read your abilities and character and threw a ball, then that might be AI.” We may be using the word “intelligence” in our everyday lives with both broad and narrow definitions.

Even if robots only move mechanically, we might arbitrarily imagine that those are intelligent movements. Even if it is a simple machine that just moves an arm when a single sensor reacts. However, when we dare to say, “AI,” we often imagine a much greater intelligence. The way we view “intelligence” itself is built into the pragmatic demands of our everyday lives at any given moment. That’s why it is difficult to define AI.

 

Miyake: There is an important point in what you just said. What I have been talking about thus far is the depth of intelligence itself. Pointing in the opposite direction is the depth of human understanding of AI. This depth is roughly proportional to the depth of the AI itself: if an AI is shallow, humans can only understand it shallowly, and if an AI has a somewhat deep structure, then humans can hold a correspondingly deep understanding of it.

For example, it is important for AI to be able to predict the actions of users in digital games. When doing so, there is technology that considers users to be AI. It assumes users will move in the same way as AI. In other words, AI imagines humans using its own intelligence. Humans are the same. If someone is quite clever, they can predict their opponent with corresponding depth, and if shallow, they predict in a shallow way.
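The idea of "AI imagining humans using its own intelligence" can be sketched as self-projection: the game predicts the player's next move by running its own decision heuristic on the player's situation. The scoring function and action names below are assumptions for illustration, not an actual technique from any specific game:

```python
# Hypothetical sketch: predicting a player's next move by assuming the
# player reasons the way the AI itself does. The AI evaluates each option
# with its own scoring heuristic and assumes the player picks the best one.

def ai_score(option, distance_to_enemy):
    # The AI's own (illustrative) heuristic: attack when close, retreat when far.
    if option == "attack":
        return 10 - distance_to_enemy
    if option == "retreat":
        return distance_to_enemy - 5
    return 0  # "wait"

def predict_player_action(distance_to_enemy, options=("attack", "retreat", "wait")):
    # Project the AI's own intelligence onto the player.
    return max(options, key=lambda o: ai_score(o, distance_to_enemy))

print(predict_player_action(2))   # close by: the AI expects an attack
print(predict_player_action(12))  # far away: the AI expects a retreat
```

The depth of the prediction is bounded by the depth of `ai_score` itself, which mirrors Miyake's point: a shallow intelligence can only imagine its opponent shallowly.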

Another important issue for the acceptance of AI is human instinct. Humans instinctually distinguish natural or artificial things, and can distinguish animals from nature. This is because there is the possibility that animals might inflict injury on us. This is surely an instinct from back when humans lived among the great outdoors. We instinctually turn if we hear a rustle in the grass. This is because if there is a bear there, we have to run away.

So what about robots with AI? They are both man-made and animal-like. This confuses humans. How do we watch out for a man-made object that moves? Humans feel an ambivalence towards AI: man-made objects are safe, but animals are dangerous.

 

Yoshikawa: That is really interesting. It makes me feel uncomfortable when Boston Dynamics’ humanoid robots and animal-type robots dance or do backflips because the robots are unclassifiable.

 

Miyake: Exactly. Should we think of them as man-made objects or animals? I think the person on the receiving end decides if they are safe or dangerous based on which they feel more strongly about. 

Japanese AI is a Companion

Yamamoto: Surely the difference between considering AI a companion or being afraid of it is deeply related to the culture of a country or region. What is the attitude towards non-human things and where are machines positioned in society? For example, even just comparing Japan with China, Europe, and America, the attitudes toward AI are surely different.

 

Yoshikawa: The inclination to pay attention to whether something is an animal or not is perhaps universal to all humankind. But subtle differences occur with the combination of local culture and customs.

 

Miyake: If we simplify this, in Japan, we say “yao yorozu no kami (literally meaning “eight million gods and goddesses”),” and there is a sense that we are all friends, including frogs and insects, Hatsune Miku (a virtual singer), and Tamagotchi (handheld digital pet). That’s why the AI created in Japan takes on a form that can be brought into the home, like aibo, a dog-shaped robot.

At the same time, in the West, God, humans, and AI are aligned vertically, with AI as the servant. That’s why it is easy for AI to become a part of society as a form of manpower. For example, we order smart speakers to turn on the power.

 

Yoshikawa: This hasn’t changed even in Blade Runner 2049 (2017). Even in 2049, God, human, and replicant are aligned vertically.  

 

Miyake: Yes. Social acceptance of AI is completely different based on local culture. That’s why the robots made in Japan surprise the world. “Huh? Why do you create dog-shaped robots?”

 

Yamamoto: “It doesn’t clean the house?” (laughs)

 

Miyake: Exactly. If we assume AI is a servant, then this kind of robot serves no purpose. Rather, a dog-shaped robot that needs to be cared for by humans must seem odd. That’s why aibo was a shock.

If we consider fiction and AI, there are many stories where AI rebels against humans in Western movies and novels, such as The Terminator and Metropolis. This is repeated in fiction because of the order of God, human, and AI. At the same time, in Japan, AI is crafted as a horizontally aligned friend like Doraemon.

It’s possible that even the same work can be viewed differently. Western people might look at R2-D2 in Star Wars as a servant, but Japanese people understand R2 as a Doraemon-like friend, and may think Luke to be cold towards the droid.

 

Yoshikawa: We have come to be familiar with various forms of minority literature, but something like postcolonial literature on AI may appear from the differences in the way different cultures treat AI. I want to read these kinds of essays in Bungakukai!

 

Yamamoto: Me, too! (laughs) If we hope to make this postcolonial AI literature accessible to people from other cultures, we will need to start from theories on civilization. We have to understand the historical and cultural background behind why Japanese people make AI into a dog-shaped robot.

 

Yoshikawa: For example, films that work as cultural anthropology, like Black Rain (1989). When Nick (Michael Douglas) encounters Japan’s police culture while visiting Osaka, he is perplexed by it.

 

Miyake: Even in Isaac Asimov’s robot novels, a human and a robot work together as detectives, but how should we read this? Asimov makes robots into servants, yet they are not written simply as servants. It feels like he understands them in a multidimensional way.

When we look closely at these works, there are many parts that cannot be discussed only as cultural differences, and much of the works depend on the creator’s personal views. Even Western artists are not always found in the heart of society, and they may write thinking that while society views them as servants, they are different.

 

Yamamoto: For example, there may be people who hold a view of AI that calls for equality from the perspective of having been driven away from their country, like an exiled literary person.

 

Miyake: There have also been discussions on whether or not to grant rights to AI. From theories of technology and society, questions have been raised about whether there is social bias in the definition of AI itself: for example, whether creators, who belong to an elite class, have defined AI in a way that suits only themselves. I think discussions of what a socially equitable definition of AI would look like will only become more heated.

AI Within Games

Yamamoto: Based on what we have discussed thus far, I think a problem emerges when considering the digital games and AI that Miyake researches and develops. In other words, suppose we create an MMORPG (Massively Multiplayer Online Role-Playing Game) where many users gather in the same world and play together regardless of nationality, age, or social status. Within this game, characters controlled by AI also appear. In a place where players from different cultures gather, or said another way, to people who hold different views of AI, what kind of AI are we able to offer?

 

Miyake: It’s very difficult. Even if we say, “game AI,” there are three types and they work together like a government separated into three branches: the “Meta-AI” which directs the game while controlling and looking over the entire game, the “Character AI” which controls characters, and the “Spatial AI” which supports spatial awareness.
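The three cooperating roles Miyake describes can be sketched as a toy class structure. The class names follow his terminology, but the methods, the one-dimensional world, and the interactions are invented purely for illustration, not an actual engine API:

```python
# Illustrative sketch of the three cooperating game-AI roles:
# Meta-AI (directs the whole game), Character AI (controls a character),
# and Spatial AI (supports spatial awareness). Details are assumptions.

class SpatialAI:
    """Supports spatial awareness: answers queries about the game space."""
    def nearest_cover(self, position):
        return position - 1  # toy 1-D world: cover is always one step back

class CharacterAI:
    """Controls an individual character, using the Spatial AI's knowledge."""
    def __init__(self, spatial):
        self.spatial = spatial
        self.position = 5
    def take_cover(self):
        self.position = self.spatial.nearest_cover(self.position)

class MetaAI:
    """Looks over the entire game and directs it."""
    def __init__(self, characters):
        self.characters = characters
    def raise_tension(self):
        for c in self.characters:
            c.take_cover()  # direct every character into cover before an ambush

spatial = SpatialAI()
npc = CharacterAI(spatial)
MetaAI([npc]).raise_tension()
print(npc.position)  # the Meta-AI's direction, executed via the other two AIs
```

The "separation of powers" is visible even in this toy: the Meta-AI never touches positions directly, and the Character AI never decides when to act on its own.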

For example, when the Meta-AI interferes with game users, what kind of position should it take? Japanese users may not resist following various commands given by an AI, but non-Japanese users may not be able to stand it. Games are a luxury, and users don’t need to tough it out. If something requires the user to tough it out, then I think it ought not to be done at all.

 

Yamamoto: You can always throw out the game if you think you can’t do whatever the task is.

 

Miyake: That is why a user’s cultural background is easily exposed. But what is difficult is the Character AI, especially friendly characters. Because they spend more time together, users are more sensitive to friendly AI than to opponent AI. Non-Japanese players tend to think, “I’m the leader, so follow me,” but Japanese players want the AI to be a friend.

In addition to cultural differences, there are also individual differences. For example, let’s think about when an autonomous Character AI tries to knock out an opponent with the user. When the player tries to deal the finishing blow, the friendly AI instead knocks the opponent out from beside the player. For some, this will be fine, but there are others who will not like this.
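The finishing-blow problem above can be reduced to a one-line policy decision. The preference flag and the function are assumptions sketched for illustration; a real game would have to learn or ask for this preference per player:

```python
# Hedged sketch of the finishing-blow problem: should a friendly AI deliver
# the final hit, or hold back and leave it to the player? The preference
# flag is a hypothetical per-player setting, invented for illustration.

def companion_should_attack(enemy_hp, companion_damage, player_wants_last_hit):
    would_finish = companion_damage >= enemy_hp
    if would_finish and player_wants_last_hit:
        return False  # hold back: leave the "best part" to the player
    return True

# A player who wants the finishing blow: the AI holds back at the end.
print(companion_should_attack(enemy_hp=3, companion_damage=5, player_wants_last_hit=True))
# A player who just wants the fight over: the AI finishes it.
print(companion_should_attack(enemy_hp=3, companion_damage=5, player_wants_last_hit=False))
```

The hard part is not the branch itself but estimating `player_wants_last_hit`, which differs by culture and by individual, exactly as the discussion notes.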

 

Yoshikawa: It would be like, “You took away the best part!”

 

Miyake: Is it okay for the AI to anticipate the user’s actions or should it stick to assisting the user? Or should it assist well without the user knowing? We have learned that the problems facing game development up until now have anticipated the problems contained in AI that operates within games.

Actual AI is man-made, but the AI within games exists as a character in the same way as a player does. This is something unique to digital games. The distinction between man-made and real goes away, and users interact with AI as “animals” just like the user. It is a confined world where man and AI can exist as equals.

Also, AI within games that autonomously moves in real time is a field that has quickly developed in game development.

What has the research and development of digital games been about? I think the field of games is one where we have continued to think about people. AI keeps appearing in games, and we also research how people accept AI; we take the results of researching people and feed them back into the AI. We now know that games are useful for both human research and AI research. GAFA and other companies have recently created simulation games that people and AI can join, and are moving AI research forward using these games.

 

Yamamoto: That’s because if we create such a space, traces of the actions, thoughts, and desires of the people who join will be left behind in great quantity as data.

People are Robust Against Contingencies

Yoshikawa: I get the sense that game developers face the fundamental question of how to entertain users. Early game developers might have been inspired by dystopian literature or SF, and surely many modern literary people get their ideas from game developers.

 

Miyake: I think that the “subject” is a common point between literature and AI. Digital games trace their roots to ELIZA, from 1966. It was an AI created for counseling, and it allowed one to have a simple conversation with a computer. Using this technology, a text-based role-playing game (RPG) was created. In text, the user was asked, “Will you go west or east?” and the user made a selection. This, too, was a form of “dialogue” between user and computer.
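The text-RPG "dialogue" described here is just a loop in which the user's word moves them through a small world. The room layout and text below are invented for illustration, not taken from any historical game:

```python
# A minimal sketch of the early text-RPG "dialogue": the computer asks
# "Will you go west or east?" and the story branches on the user's reply.
# The rooms and wording are illustrative assumptions.

ROOMS = {
    "crossroads": {"west": "forest", "east": "village"},
    "forest": {},
    "village": {},
}

def step(room, choice):
    """One turn of the dialogue: the user's word moves them through the world."""
    return ROOMS[room].get(choice, room)  # unknown words leave you where you are

room = "crossroads"
print("Will you go west or east?")
room = step(room, "west")
print(f"You are in the {room}.")
```

Even in this stripped-down form, the program is a "subject of dialogue": it poses a question, interprets the answer, and changes the shared world in response.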

Later, graphics appeared and we stopped seeing AI as a subject to speak to people. Then characters appeared, and speaking went from being part of the game mechanics to being part of the characters. As a result, AI as the subject of dialogue became more and more hidden within games. However, naturally, as it is a program, it is the AI as the subject of dialogue that makes characters speak.

Then AI appeared in games. Just as I explained before, AI cooperates in three separate ways so it has become difficult to know where the AI is as the subject of dialogue.

So how can we make this AI, hidden in the background from the user’s perspective, smarter as the subject of dialogue? One way would be to branch the story toward what is best for each user and change the world itself. It would be wonderful for the AI as the subject of dialogue to create stories just like an author. Making it smarter requires a better understanding of the user. This AI as the subject of dialogue is sometimes called “Meta-AI.”
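A Meta-AI choosing a story branch from what it has learned about the player might look like the sketch below. The player metrics, thresholds, and branch names are all hypothetical, invented only to make the idea concrete:

```python
# Illustrative sketch of a Meta-AI branching the story based on a model of
# the player. Metrics, thresholds, and branch names are assumptions.

def choose_branch(player_profile):
    """Pick the story branch the hidden 'subject of dialogue' judges best."""
    if player_profile["deaths"] > 5:
        return "quiet_side_story"   # give a struggling player breathing room
    if player_profile["explored_ratio"] > 0.8:
        return "hidden_ruins_arc"   # reward a thorough explorer
    return "main_quest"

print(choose_branch({"deaths": 7, "explored_ratio": 0.3}))
print(choose_branch({"deaths": 1, "explored_ratio": 0.9}))
print(choose_branch({"deaths": 1, "explored_ratio": 0.2}))
```

The rules here are crude; the point of the passage is that an authorial Meta-AI would need a far richer model of the user than a handful of counters.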

 

Yamamoto: This conversation reminds me of online advertising. It is a type of advertising that appears on the assumption that a user will want something, based on traces of web activity. But it is often something already bought or something with a sense of déjà vu, and it doesn’t function well. Frankly speaking, I have never been drawn in by this kind of advertising. In other words, if we use data on user behavior as raw material, we may keep offering images that do not interest the user. Users want to enjoy playing digital games, and to enjoy a game, there need to be surprises the user can’t imagine. How can we create these surprises with AI? I think it is the role of humans to bring about surprises, but what do you think?

 

Miyake: What AI is good at is segmenting and understanding the world. If humans can divide a problem into one hundred parts, then AI can divide it into ten billion parts as an image. AI will surely get better at go and shogi. But it cannot go beyond the world of go and shogi, nor can it go outside the world of games. The “frame problem” has long been said to be the limit of AI. Problems are set by people and AI cannot exist outside that world. There is no world outside of cleaning for cleaning robots. They do not even consider suddenly helping someone or accepting a package.

When thinking about the creativity to offer surprises, as Yamamoto said, humans have an overwhelming strength. This is because there are many contingencies in the world we live in, and human intelligence has developed through them. Many unanticipated things occur, such as earthquakes, typhoons, and accidents. Humans are creating intelligence amidst these limitless possibilities. The term “robustness” refers to how many of these contingencies you can live through. Delicately solving problems under limited rules, as in go, and surviving unexpected contingencies require completely different skills. AI is better than humans at detail, but humans are overwhelmingly superior with contingencies, and that is something AI cannot deal with. In literature, too, part of the appeal is the unpredictable leap to a new development. Humans can create this kind of surprise because we have the intelligence to survive amidst contingencies, and AI cannot overcome this.

 

Yoshikawa: Or, as it were, the characteristics of Kojima Nobuo (1915–2006) (a Japanese novelist who wrote works of an allegorical abstract world in an esoteric style).

The Internet World is Drying Up

Yamamoto: If we say that human intelligence is supported by contingencies in the world, then Miyake is trying to make this a reality within games. This means it is necessary to create a place that offers contingencies to the world the characters are active in, rather than just within character AI.

 

Miyake: That’s exactly right. If we want to create a deep intelligence, then we must also make the world have depth. For example, in the world of puzzle video games like Tetris, even if we try to create a deep intelligence, it cannot do anything more than stack blocks. It may become good at operating Tetris but it does not count as the kind of intelligence I aim for.

I believe there are various indicators of a world’s depth, but even looking only at contingencies, they are difficult to create digitally. We say that the internet is vast, but no matter how much information the internet world holds, it is drying up. That is because there is no world there, only the shadow of a world, that is, information, so thought cannot gain depth. Humans are smart because the real world exists, but I do not think the 3D virtual world of the metaverse can give rise to a deep intelligence as it stands. That world has no contingencies, no limitless resolution, and no capacity to change without limit. Even knowing this, as a game developer, I want to create worlds with depth for digital games.

 

Yamamoto: Because there are contingencies, there is a rich, real world that offers surprises and unknowns. It is a little paradoxical to try and create such a world with a cluster of inevitability known as a program, a series of commands for a computer.

 

Miyake: At the same time, humans have a desire to simplify the world. Even though we lived among complex, rich forests, we created cities and fled from nature, creating computers and entering a simplified world. I think there are similarities to this in literature, as well. For example, there is a format to mysteries, and there is a sense of relief in being able to begin reading by wondering who the culprit is and having the mystery guide you as you read.

Complexity is tough. It consumes a lot of calories, is wasteful, and uncertain. Even though life in the real world is uncertain, in the simplified world of digital games, you can raise your level and stronger enemies can be defeated. We can be comforted by a world with an achievement curve.

Having said that, currently so-called “open world” games, similar to a real world simulation where you can do whatever you want, are all the rage. It is a somewhat strange phenomenon.

 

Yoshikawa: The pendulum may oscillate between the two modes, both in individuals and in society: the desire to live within a mythical pattern that follows a certain scenario and the desire to explore if they get tired of that.

 

Miyake: Saying “mythical” is incredibly pertinent. People don’t accept the world as it is, so we created some myths and a world with order that does not have to directly take on our complex world. And then we made these myths more and more complex. The myths made the world interpretable. Games have condensed this process and created complex worlds that operate with easy-to-understand rules. In the modern world, digital games are also an alternative to these myths.

 

Yamamoto: I think the phrase “swing like a pendulum” describes it perfectly. This reminded me that I hold lectures on how to create games twice a month. The high school students who attend create game worlds with programs and then dive into those worlds wearing VR headsets. The lecture is held via Zoom, and one time, one student who always participated from his room had a field as his background. He said that he had pitched a tent in the yard just outside his house for the lecture. When I wondered why, he said it was because it was tough for him to always be in an internet or VR world. The worlds that are digitally created may at first glance appear complex, but in the end, there is a limit to their complexity. If you spend time in the world, you begin to see the patterns and structure. So by going outside and gazing at the grass, trees, and sky, he realized that it was so complex that he couldn’t get tired of seeing it.

 

Miyake: That’s amazing.

 

Yamamoto: Yes, that is an insight he gained because he had immersed himself in the digital world deeply enough to sense the gap between digital structures and nature. I think this is the pendulum Yoshikawa was talking about. At the same time, the seemingly limitless complexity of nature is sometimes hard to take in. Without both, we probably wouldn’t be able to cope.

 

Yoshikawa: I think both exist in literature, as well.

 

Yamamoto: Yes. There are days when I don’t want to read anything other than a mystery.

 

Yoshikawa: There are also days when I want to read James Joyce.

Literature Helps Us Return to Reality

Yamamoto: It is surely important for people to go back and forth between both.

 

Miyake: Yes. Reading at the end of the day is also a way to shut out the real world for a moment, settle into a simple world, and return to the complex world once you have grown tired of the simple one. Many novels themselves have the structure of There and Back Again (the subtitle of The Hobbit, a 1937 children’s fantasy novel by J. R. R. Tolkien). Novels kindly provide a natural entry into the story and a natural return back to reality. Perhaps the return to reality has not yet been refined enough in digital games. Because literature is refined, it draws you in to a degree and then returns you to reality. Even Dostoevsky shows you the way out despite his incredibly deep worlds.

 

Yamamoto: That’s because it’s risky if you don’t return.

 

Miyake: Yes. A strength of The Lord of the Rings is that it does not end with the defeat of the Dark Lord Sauron. While going home, the hobbits bicker and we part ways with travel companions at the Grey Havens. Because there are scenes that make us sad, there is a good fade out. It’s the same format as There and Back Again.

Perhaps it’s not possible to create paths back to reality until a medium has matured over many years. Many old movies end before we are ready, but movies today have a good fade out and are made to gradually bring us back to reality. Digital games still have a short history, so they haven’t prepared a way to return us to reality; there are still games that end right after you defeat the final boss. VR is even younger, and while it is great at pulling people into a world, it will need to deal with the issue of bringing people back out. Games have much to learn from literature about returning to reality.

 

Yamamoto: This may be one reason behind the problem of addictions.

 

Miyake: I think it will change as time goes on.

I think that digital games are very close to literature. For both, people change within an experience. That is why there is value in the experience of reading or playing, as there is no meaning in just reading the synopsis or details. However, literature reaches a deeper place than games do. The person I am before reading a Dostoevsky novel is different from the person I am after on a deep level. There is nothing else with that kind of effect. Literature does not preach the meaning in suffering, but through the experience of the story, I think that the reader becomes aware of it.

 

Yamamoto: It is a type of simulator. You can experience the internal views or thoughts of someone different from yourself, which is an experience you cannot have in everyday life. The reason why we can simulate life deeper than in other fields is because the expressive channel of words can make things abstract fairly well. Movies are too concrete.

Miyake: It is because it is written in words that a unique, individual experience is drawn out. Even with a simple bad guy you see in a movie, with written words, the reader draws images from their own experiences and the bad guy becomes the bad guy they encountered recently. With literature, images are created from materials in the reader’s memory, so it becomes a story image unique to that person, and the reader is drawn deeply into the story. And through the story, the reader becomes aware of different aspects of even the bad guy and can empathize with his back story. Literature draws out our experiences like command tools. It draws out the reader’s deepest memories and causes a chemical reaction to occur. This reaction comes from the various internal memories and experiences we have, not from the story. I think stories give us this combination. Stories also shift from heating up to cooling down. In other words, there is an exit path from the story prepared within the story. This is attractive.

Life and Intelligence Are Inseparable

Yamamoto: Miyake, you have written books on AI and philosophy, such as Jinkochino no tame no Tetsugaku-juku (Philosophy School for Artificial Intelligence), and have actively tried to incorporate philosophical results into your AI research. Why is that?

 

Miyake: First, I consider AI research to be neither “philosophy” nor “science,” but rather something in between. Every area of study involves deconstructing the world. Sociology, psychology, chemistry, biology… They consist of breaking down problems and researching each one individually.

But AI is an experiment in gathering all of this up again and trying to re-create a world and intelligence. It is not a study that deconstructs, but one that combines. This is different from other fields. To put it simply, there is a table with nothing on it. There, knowledge is brought from various fields, and it answers questions with “we created an intelligence,” “it was no good,” or “oh, we’ve created a world.”

I believe that philosophy plays a role in determining layout. When combining, it is necessary to decide upon the overall layout rather than just simply creating parts. If the layout is for “intelligence” as described by Descartes, then the parts should be arranged this way, and if the layout is for “intelligence” as described by Bergson, then it should be arranged this way.

As it is created from a combination of all studies, I think of AI research as real “anthropology.” To put it simply, there is no technology unique to AI. The kanji conversion system from the 1980s was called “AI,” and neural networks were also “AI.” Now they are called information processing and optimization technologies. Deep learning will surely become a type of information processing technology eventually. Everything said to be AI research technology will be passed on to other fields. And we maintain a hollow structure. We repeat the process of creating and discarding the various things that bubble up from within. I think this exercise itself is interesting and I believe it to be the ultimate form of human exploration.

What is interesting is that when creating AI within digital games, something will gush out for a moment, as if I have gotten close to life. Many times this ends in failure, but this trial-and-error process may eventually lead to the creation of life or intelligence. The scientific and philosophical attitudes required for AI research are each nothing more than one aspect of human intelligence, and through my research, I have come to sense that they are essentially the same thing.

 

Yamamoto: Listening to that, I am reminded of Spinoza and Leibniz in 17th- and 18th-century Europe, for example, before the split between philosophy and science that we now see in the modern sciences and humanities. At that time, a single person would research nature and numbers, the human mind and the characteristics of language, or society and religion, but gradually these fields split into narrow specializations. Your AI research attempts to gather, combine, or link the various sciences and kinds of knowledge that were at one time disassembled like the Tower of Babel. It seems an ambitious attempt to see how we can create a better model of this world.

 

Miyake: On top of that, when you consider the relationship between modern AI and philosophy, it is the “Man a Machine” worldview that humankind is confronted with. Since Descartes, or to be precise, since his disciples, there has been a philosophy that understood man as a mechanical thing, and at the same time, an opposing philosophy, and both existed in parallel. However, AI corresponds to the “Man a Machine” philosophy more than the other. That is, if we can create intelligence with a machine, then surely man is also a machine.

There is a fear that someday, AI may scientifically deny the deeper conversations on what life and the soul are. From a philosophical standpoint, the fact that AI can win at go and shogi is not a major story, but I think fear towards AI comes from a sense of danger of not knowing when there will be an attack on something major.

A Desire to Create Alaya-vijnana

Yamamoto: You could say that there is a human identity crisis occurring. What will surely be required is a philosophy that supports human existence, just as existentialism did in the past.

 

Miyake: Yes, exactly. That's why it's no surprise that philosophy has come into fashion. Actually, "information-based AI" is now in its heyday, and it is thought that AI processes information captured by sensors. As an extension of this thought, there are some people who will say that humans are just information, made up of DNA. This is exactly the "Man a Machine" idea.

I strongly protest this idea. Information is a shadow of substance, so no matter how many shadows you gather, they cannot become substance. Clearly, life and intelligence are inseparable: there is no life without intelligence, and intelligence without life is impossible. The fundamental power that causes a world to appear cannot be acquired even by a smart AI. I believe the source of that power lies where we receive the world as life, in a part far deeper than the unconscious. That is, something like "alaya-vijnana" (store consciousness, the consciousness forming the base of all human existence) in Buddhism. Humans are rooted in this kind of base layer, and from it we manipulate language and tools. The current approach of AI research is to try to mimic only this surface-layer "intelligence."

So how can we create alaya-vijnana-like layers? People who take the "weak AI" position believe that such layers are not needed in the first place, but I think it won't be interesting unless we create all of the layers. With deep learning, intelligence turns into layers and an internal structure appears. I consider this image to be close to the worldview of vijnapti-matrata (the theory that all existence is subjective and nothing exists outside of the mind) in Buddhism.

 

Yoshikawa: You might be able to create a nearly human intelligence if you could create an AI with alaya-vijnana as a base layer.

 

Miyake: Yes. The prerequisite for alaya-vijnana as I see it is a body. We know that we understand information with our whole bodies and not just as simple information. We can understand the world as our own experiences because we have a body. Without a body, there is no alaya-vijnana. In other words, you could say that robots today don’t have a body but only sensors.

With just the act of seeing, I think humans see something as an overall experience. You can't just extract the act of seeing. The body's complex network is itself the true nature of alaya-vijnana. We interpret the world through our network as living creatures. However, when creating robots, we declare that they can see using camera sensors. It is an elegant idea in engineering. But if we are to create an intelligence that goes beyond convenient robots, we must have that discussion once again. I want to turn this idea upside down, so I use words like "alaya-vijnana" from oriental philosophy. In the West, one begins by defining problems, but starting from chaos is the basic format of oriental philosophy. The world is inseparably interwoven with the self; this is where this philosophy begins. I think that interesting things can happen when we combine Western and Eastern knowledge in working with AI.

To sum up what I've been talking about, there are three kinds of depth needed to create "strong AI": the depths of internal structure, embodiment, and world contingencies. If we try to make these deeper, perhaps we will learn that there is a limit to trying to create intelligence with programming alone.

 

Yamamoto: That's because programming itself is sort of the embodiment of the Western approach, right?

Chaos Itself is the Source of All

Miyake: Recently, there is a method called "reservoir computing," which uses a neural network. The image is that chaos is created within a tank, and you select and pull out the chaos that you want from the tank. It is a rough method, but I believe it is close to the ideal state of oriental intelligence.
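[Editorial note: the "tank" image Miyake describes corresponds to the echo state network, a standard form of reservoir computing. The following is a minimal sketch in Python with NumPy; the network sizes, scaling constants, and the sine-prediction task are illustrative choices, not details from the interview. Only the linear readout is trained; the randomly wired reservoir itself is left untouched.]

```python
import numpy as np

rng = np.random.default_rng(0)

# The reservoir: a fixed, randomly wired recurrent network (the "tank of chaos").
n_reservoir = 200
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, 1))    # input weights, never trained
W = rng.normal(0.0, 1.0, (n_reservoir, n_reservoir))  # recurrent weights, never trained
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # scale spectral radius below 1

def run_reservoir(inputs):
    """Drive the reservoir with a 1-D signal and collect its internal states."""
    x = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ [u] + W @ x)
        states.append(x.copy())
    return np.array(states)

# Illustrative task: predict a sine wave one step ahead.
t = np.linspace(0, 20 * np.pi, 2000)
signal = np.sin(t)
states = run_reservoir(signal[:-1])
target = signal[1:]

# Train only the readout (ridge regression): "select and pull out"
# the desired behavior from the reservoir's rich dynamics.
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_reservoir),
                        states.T @ target)

pred = states @ W_out
error = np.mean((pred[500:] - target[500:]) ** 2)  # skip the initial transient
print(f"mean squared error: {error:.2e}")
```

The design point matches the interview's image: the chaotic dynamics inside the tank are generated for free and never adjusted; learning consists entirely of choosing which combination of those dynamics to read out.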

 

Yamamoto: I see. So you ultimately create chaos. Interestingly, it is similar to how world creation myths start from chaos. 

 

Miyake: There are various ways to create chaos. Chaos is different from confusion or disorder. Mistaking the two was a source of my confusion in my early days of AI research. There is both shallow and deep chaos. How can we create chaos on a deep level? Expertise is needed for this. That is the kind of scholar I want to become. I want people to say that it is amazing that I can create chaos, even if they don't know what purpose it serves.

 

Yoshikawa: This makes me think of origin-of-life simulations. When we were children, we were told it was a "primordial soup" or something of the sort, but we know a lot more about chaos now.

 

Yamamoto: We create chaos with potential, from which a variety of things can arise. It is not just nonsense.

 

Yoshikawa: We have also gotten closer to a mythical story now.

 

Yamamoto: I’m sure people will say, “The chaos this person makes is good!” (laughs)

 

Miyake: But, because the West is dominant in the academic world today, you gain prestige by contributing to English-language journals and studying at American universities. It is hard to say anything, or to be heard, outside of that context. But I am in a freer position to say what ought to be said. We need to speak of the importance of the Oriental approach, which is quite separate from the Western approach. I think we humans can arrive at intelligence precisely because we have both the East and the West.

 

Yoshikawa: For analysis, the Western approach is fine, but surely the Oriental approach is necessary when trying to create an AI single-handedly, as Miyake does.

 

Yamamoto: The prerequisites for creating something and for analyzing something are different. On top of the Western approach, we layer on other kinds of knowledge and practice.

We were able to hear a great deal from Miyake because he is involved in research and development in a field different from existing academia. Today, we looked at the idea that if you want to create intelligence, then you should create chaos.

 

Yoshikawa: In a sense, it is a reversal of ideas. But when you think about it, it’s the royal road.

 

Miyake: That’s because in the East, it is normal for everything to begin with chaos.

Moderated by Yamamoto Poteto
(Recorded at Bungeishunju on October 8, 2021)

Translated from “Miyake Youichiro ‘AI kenkyu wa Sekai to Chino wo saikochikusuru’ (The future of AI and literature—Interview with Miyake Youichiro: AI research rebuilds the world and intelligence),” Bungakukai, February 2022, pp. 22–37. (Courtesy of Bungeishunju, Ltd.) [July 2022].

Keywords

  • Miyake Youichiro
  • game AI developer
  • Square Enix
  • Yamamoto Takamitsu
  • game designer
  • Tokyo Institute of Technology
  • Yoshikawa Hiromitsu
  • writer
  • AI
  • artificial intelligence
  • digital games
  • VR
  • literature
  • fiction
  • philosophy
  • intelligence
  • chaos
  • strong AI
  • weak AI
  • robots
  • Descartes
  • Man a Machine
  • alaya-vijnana
  • vijnapti-matrata
  • Buddhism
  • character AI
  • spatial AI
  • Asimov
  • Dostoevsky
  • Tolkien
  • Kojima Nobuo