AI ... YO!

Nov 1, 2016   #AI  #Em  #Friendly AI  #Neural Style  #out-of-office 

(This article was originally posted on out-of-office.)

The recent Wired interview on artificial intelligence between Joi Ito, head of the MIT Media Lab; Scott Dadich, editor-in-chief of Wired; and US President Barack Obama brought many perspectives into one room. The discussion is a great read, covering the morally ambiguous ground that we might need AI to inhabit when we put it into self-driving cars, the role of government in funding and checking AI research, and … Star Trek. As a techie (and Trekkie), it is hard for me to resist the temptation of having a general AI at my disposal. However, what would the big picture be like? Would we be much better off with general AI all around us? Would AIs end up taking over the world, as dystopian science fiction usually paints it, leaving us to fight to survive … maybe? Would I want to be in a world with general AIs all around, or would I find that world wanting?

An observation often put forth is that societies that have adapted to new technology have ended up dominating the planet’s culture over time. Guns, germs, steel and Facebook. If we simply accept that as truth, then we leave ourselves no choice. Should we just jump into any tech billed as the next great change in society without questioning it because … we have no choice anyway? Does a person exist if they haven’t logged into Facebook in years? Should we not question mechanisms of mass surveillance even if they are supposed to give us benefits?

While networking and surveillance are the start of the story, we cannot claim to understand the consequences of plugging AIs into this scenario from day one. The values we want these AIs to reflect are important inputs that need to be taken into account before we take that step. It can be alarming to see cherished values such as privacy being violated on the net, but only by skirting these boundaries can we discover how much we as a society truly value what we claim to. The problem, though, is that powerful dynamics come into play when we put networking and AI together. Such experimental skirting can result in irreversible phase changes … and we don’t have a great history of understanding phase changes, let alone acting to undo them (hint: climate change).

So … some great minds of our time, like Stephen Hawking and Elon Musk, are advocating a cautious stance in their Open Letter on AI and encourage simultaneous research into ensuring that such AI is beneficial to society. Well before them, I think, Eliezer Yudkowsky articulated the same problem, calling it Friendly AI, and has been diving into the details of the decision theory involved in making one. Much of the Less Wrong community, which Yudkowsky started, is also familiar with the topic. The problem isn’t a simple one if you consider that a capable AI can modify itself by examining and rewriting its own “source code”. A closely related and interesting hypothesis to come out of Yudkowsky’s thinking is the AI-Box Experiment, a version of which has been turned into a movie.1 The hypothesis is that a sufficiently powerful but confined AI can always convince its human makers to let it out of its “box”. The confinement is not necessarily a physical box but could be in the form of access to the internet, or access to its own source code, for instance. If that is true and there is no way to limit the capabilities of such an AI once it comes into existence, we do need to take the Friendly AI concept really seriously.

While this seems too far out into the future for us to be concerned with right now, it is closer in many ways, and it is already skirting aspects we traditionally hold dear to humanity. The results of using deep convolutional neural networks to transfer style from paintings to photographs are of high enough quality to make us think that painters who’re paid to mimic the styles of the masters may be out of work soon. The value of artistry and the years of work artists put into training are being called into question. Each instance of the styled text that I made recently took about 10 minutes to produce on a machine, and I personally did almost nothing but watch the logs fly by. What will happen to artistry now that this tech is within reach of the masses? What artistic values do we want to preserve, and which ones do we allow into obsolescence?

Stylized text "fontli" ... by neural-style
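For the curious, the technique behind that styled text boils down to something like the sketch below: optimize the pixels of an image so that its deep-layer features match a content image while its feature correlations (Gram matrices) match the painting, in the spirit of Gatys et al. This is a minimal sketch assuming PyTorch and torchvision rather than the Torch/Lua neural-style tool I actually used; the layer indices, weights and step counts are illustrative choices, not its exact settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

# Pretrained VGG-19 as a fixed feature extractor.
vgg = models.vgg19(pretrained=True).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)
for m in vgg:
    if isinstance(m, nn.ReLU):
        m.inplace = False  # keep stored activations intact for the losses

STYLE_LAYERS = {0, 5, 10, 19, 28}  # conv1_1 .. conv5_1 (illustrative choice)
CONTENT_LAYER = 21                 # conv4_2

def gram_matrix(feat):
    # Channel-correlation statistics: (1, C, H, W) -> (C, C).
    _, c, h, w = feat.shape
    f = feat.view(c, h * w)
    return (f @ f.t()) / (c * h * w)

def features(x):
    # One forward pass, collecting the style and content activations.
    style, content = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            style.append(x)
        if i == CONTENT_LAYER:
            content = x
    return style, content

def transfer(content_img, style_img, steps=300, style_weight=1e6):
    # content_img / style_img: ImageNet-normalized (1, 3, H, W) tensors.
    style_targets = [gram_matrix(f).detach() for f in features(style_img)[0]]
    content_target = features(content_img)[1].detach()
    img = content_img.clone().requires_grad_(True)  # optimize pixels directly
    opt = torch.optim.Adam([img], lr=0.02)
    for _ in range(steps):
        opt.zero_grad()
        style_feats, content_feat = features(img)
        style_loss = sum(F.mse_loss(gram_matrix(f), t)
                         for f, t in zip(style_feats, style_targets))
        loss = F.mse_loss(content_feat, content_target) + style_weight * style_loss
        loss.backward()
        opt.step()
    return img.detach()
```

Notice how little of the process is left to the human: pick two images, pick a style weight, and watch the optimizer grind.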

My current thinking is that artistry will not become obsolete, but it will keep shifting its emphasis, its meta-ness and who gets to be the artist … at least until human artists come to treat their machine counterparts as part of the dialoguing community. The tool maker’s quest to enhance human capabilities has, thus far, led to newer artistic territory. The question is, will it continue to do so with advanced AI as the ultimate tool? With the text styling experiment, for instance, we still need to choose the styles to mix, and we may find some results tasteful and others less so. So far, we’ve always had space for this “taste”. Even with basic non-AI technology such as muvee (for making automatic music videos), we see that while some of the aesthetic chores of video editing are automated, a higher aesthetic is still needed to put together an appealing music video. This “higher aesthetic” is, however, within reach of those unskilled in the video arts. That is a fun world that I very much want to live in! Designing that world, I think, will be a challenge that requires both insight into technology and insight into the human condition … because fun and effort go hand in hand.

While it might seem a relief that we’re retaining “taste”, it too is vulnerable once it gets expressed and captured as data. As the folks at EyeEm have shown, you can train networks to capture such “good taste” too. The EyeEm service uses such a trained network to rate submissions within its community of photographers, and this network has been taught by the expressed taste of the EyeEm community. While these AIs have thus far only been feeding on the data we produce, the question of when they will begin participating as equal contributors lingers nearby. What then is the next meta-level for artists? Would it be to play AIs like a sophisticated musical instrument?
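In case that sounds mysterious, it needn’t be. Here is a hypothetical sketch of how such a “taste” model could be trained: fine-tune a pretrained image network to regress the ratings a community has already expressed. To be clear, this is my guess at the general shape of such a system, written in PyTorch; EyeEm hasn’t published their pipeline, and the model choice, names and data below are made up for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a generic pretrained image model and replace its classifier
# head with a single regression output: the predicted aesthetic score.
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 1)

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(images, ratings):
    # images:  (N, 3, 224, 224) ImageNet-normalized photos
    # ratings: (N, 1) community-expressed scores scaled to [0, 1]
    model.train()
    opt.zero_grad()
    pred = torch.sigmoid(model(images))  # squash to the rating range
    loss = loss_fn(pred, ratings)
    loss.backward()
    opt.step()
    return loss.item()
```

Point a model like this at new submissions and you have an automated curator that is, in effect, a compressed replay of the community’s past judgments.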

Once we introduce AI into the community process that art is, the story could take a very different turn. If there are no limits to how AIs may engage with us, we might find ourselves being commoditized – human poetry and artistry and all. With AI, what we’re doing is taking the essence of what we know as “being” and increasing its resource efficiency, speed and replicability. The economics of such a world would indeed be very strange and interesting to explore. It is this curiosity that’s now driving me to read Robin Hanson’s The Age of Em.

Robin Hanson's “The Age of Em” explores the socio-economic consequences of technology that can capture and replicate human-level intelligence.

Hanson, a physicist, philosopher and tenured economics professor, has been a long-time favourite of the Less Wrong community. The book uses our current understanding of social dynamics to paint a picture of how “weird” we might find a world with replicable human-level intelligences – “ems” – relative to ours, just as our ancients would surely find our world weird relative to theirs.

Meanwhile, the prime purpose of all this verbiage is to urge us geeks to think deeply about the human side of technology as we engage in our work.

Line of code by line of code, we’re crafting our future. Here and now.


  1. I won’t reveal which one, ‘cos that’ll be a spoiler for those who haven’t seen it yet. ↩︎