Inorabo: ALife research into "life as it could have been" is turning the city into a living thing

An opera conducted by an android

――Mr. Fujiki, why did Inorabo turn its attention to ALife and begin joint research with Alife Lab.?

Fujiki: Inorabo's mission is to envision new futures using advanced technology, but I feel that people have tended to shy away from the technologies that are presented to them.

For example, there was a technology that Steve Jobs praised, saying, "This is wonderful. It is the greatest invention since the PC." And yet a project to use it to build a more convenient city was rejected by local residents.

Even when a tool is promoted as rational, convenient, and efficient, people are still frightened of it. If so, what we need is not the technology itself but a mechanism through which it can be accepted by society.

It was around then that I learned of Mr. Ikegami's project in which an android conducts an opera.

Watching artificial life conduct a piece of music, I felt that a "world established beyond human understanding" is exactly the way of thinking and the set of values needed to envision an alternative future, and that new technology would be born from it. So I contacted Mr. Ikegami right away and introduced him to Aoki. That was at the end of 2018.

Alter

Regenerating nature with a rich sound environment

――So, what kind of joint research are Alife Lab. and Inorabo conducting? And what are your plans going forward?


Aoki: One of our R&D projects is ANH, a soundscape generator based on the acoustic niche hypothesis.

It began with "SF MANGA DESIGN RESEARCH," a methodology of our own devising that explores research themes through ethnographic science-fiction manga and the futures that can be imagined from them. Besides Inorabo, a variety of people took part, including science-fiction writer Tetsu Ogawa, artist Ai Hasegawa, and AI expert Yoichiro Miyake.

Out of those discussions came our theme: how can an artificial system intervene in an ecosystem and bring about change? When you step into nature, you hear rich sound across every frequency band. That richness is thought to have come about because each creature evolved to emit sound in a way that avoids the sounds of the others.
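The idea described here is the acoustic niche hypothesis: species end up occupying non-overlapping frequency bands, so together they fill the spectrum. A minimal sketch of that claim, using invented example bands (the numbers are ours for illustration, not from the research):

```python
def bands_overlap(a, b):
    """True if two (low_hz, high_hz) frequency bands overlap."""
    return a[0] < b[1] and b[0] < a[1]

# Invented example bands for illustration only: each group of
# creatures has settled into its own slice of the spectrum.
niches = {
    "frogs": (200.0, 2000.0),
    "birds": (2000.0, 6000.0),
    "insects": (6000.0, 10000.0),
}

species = list(niches)
partitioned = all(
    not bands_overlap(niches[x], niches[y])
    for i, x in enumerate(species)
    for y in species[i + 1:]
)
print(partitioned)  # True: every pair of species avoids the others' band
```

Together the three bands cover 200 Hz to 10 kHz with no collisions, which is the "rich sound in all bands" the hypothesis describes.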

We thought that by recreating such a soundscape in the city, people would feel comfortable there and, as a result, gather. As the discussion progressed, the concept solidified in the direction of restoring nature, not just serving humans.

There is research in the natural sciences showing that if you play the sound of a once-living coral reef at the site of a dead one, creatures return. Could lost ecosystems be regained in the same way? We came to think this might create a new relationship between cities and nature.

To do that, we need a system that can generate a soundscape, a sound environment, in real time.

――I saw a video of the system on display. Have you demonstrated it anywhere?

Fujiki: We exhibited a prototype of ANH at MUTEK.JP 2019, an electronic music and digital art event.

Cities were originally built mainly for human beings, to keep enemies out, but these days we face issues such as the SDGs (Sustainable Development Goals). By operating a system like ANH, it may become possible to focus on the global environment as a whole. At present we are continuing research on measuring the environment itself.

Through this R&D, I realized once again that I had never tried to look at natural and urban environments from the perspective of sound.

The soundscape generator based on the acoustic niche hypothesis (ANH)

Fujiki: What is interesting about doing R&D with ALife researchers like Aoki is that they value the periphery and the hidden side of what people can consciously perceive. ALife researchers say that information from the entire environment, including such things, should be what gives rise to consciousness. Attending to unconscious information strikes me as a very interesting perspective.

Masumori: As for the system, we deliberately avoid building in a human-centered evaluation axis.

The soundscape emerges as the ALife agents communicate with one another so as to make life easier for themselves. There are hardware constraints, such as the audio interface, but within that range the agents are free to emit whatever sounds they like. It is not an optimization for what sounds better to humans, which makes it a contrast to the AI approach.
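The mechanism described above, agents adjusting their own sound to avoid one another until a partitioned soundscape emerges, can be sketched as a toy simulation. This is our own minimal illustration, not the actual ANH implementation: each agent emits one tone and drifts away from any tone it hears within a minimum gap, inside a fixed playable range standing in for the hardware limits.

```python
import random


class SoundAgent:
    """Toy agent: emits a single tone and drifts away from nearby tones.

    A stand-in sketch for the idea of agents negotiating acoustic
    niches; the real ANH system is not public, so all parameters
    here are invented.
    """

    def __init__(self, freq_range=(100.0, 8000.0), rng=None):
        self.rng = rng or random.Random()
        self.low, self.high = freq_range  # "hardware" playable range
        self.freq = self.rng.uniform(self.low, self.high)

    def listen_and_adapt(self, heard, min_gap=200.0, step=50.0):
        """Shift away from any heard tone closer than min_gap Hz."""
        for other in heard:
            gap = self.freq - other
            if abs(gap) < min_gap:
                if gap > 0:
                    direction = 1.0
                elif gap < 0:
                    direction = -1.0
                else:  # exact collision: break the tie at random
                    direction = self.rng.choice((-1.0, 1.0))
                # Move away, clamped to the playable range.
                self.freq = max(self.low, min(self.high, self.freq + direction * step))


def simulate(n_agents=8, steps=500, seed=0):
    """Run the toy dynamics; returns the agents' final tones, sorted."""
    rng = random.Random(seed)
    agents = [SoundAgent(rng=rng) for _ in range(n_agents)]
    for _ in range(steps):
        for a in agents:  # each agent hears all the others, then adapts
            a.listen_and_adapt([b.freq for b in agents if b is not a])
    return sorted(a.freq for a in agents)
```

Running `simulate()` yields tones spread out across the band: no agent is told where to sit, and nothing evaluates whether the result sounds good to humans; the spacing emerges purely from each agent avoiding its neighbors, which is the contrast with a human-centered optimization that the text draws.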