(This article was originally posted on out-of-office.)
The recent Wired interview between Joi Ito, head of the MIT Media Lab; Scott Dadich, editor in chief of Wired; and US President Barack Obama on artificial intelligence brought many perspectives into one room. The discussion is a great read, as it covers the morally ambiguous ground that we might need AI to inhabit when we put it in charge of self-driving cars, the role of government in funding and checking AI research, and … Star Trek. As a techie (and Trekkie), it is hard for me to resist the temptation of having a general AI at my disposal. However, what would the big picture be like? Would we be much better off with general AI all around us? Would AIs end up taking over the world, as is usually painted in dystopian science fiction, leaving us to fight to survive … maybe? Would I want to be in a world with general AIs all around, or would I find that world wanting?
When we begin working on a problem, one of our main tasks is to build a theory about the problem space so that we can capture and communicate our understanding of it to others and to machines. Given that when we begin building these theories we might know only a few parts of the elephant and not the whole elephant itself, what chance do we stand of discovering the whole elephant if our starting point is a few limited perspectives? In this post, I share an example of how to arrive at higher-level theories about a domain via bottom-up exploration using systematic beta abstraction.
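To make the term concrete: beta abstraction is the reverse of beta reduction — a repeated subexpression in a concrete calculation is pulled out as a lambda parameter, turning the specific computation into a reusable "theory" applied to a value. A minimal sketch (the names here are illustrative, not from the post):

```typescript
// Concrete expression: two rectangles that happen to share a width of 2.
const concrete = 2 * 3 + 2 * 5;

// Beta abstraction: rewrite E as ((x) => E[x := v])(v), making the
// shared value an explicit parameter. The abstracted function is a
// small "theory" (total area for a shared width) discovered bottom-up
// from the concrete instance.
const totalArea = (width: number): number => width * 3 + width * 5;
const abstracted = totalArea(2);

console.assert(concrete === abstracted);
```

Repeating this abstraction step systematically over many concrete cases is what lets the higher-level structure of the domain emerge from the examples.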
In the movie “The Martian”, Matt Damon plays astronaut Mark Watney who gets stranded on Mars and makes history by practicing open defecation and growing potatoes using his own shit as manure on Martian soil.
ReactJS is gradually moving towards a purely functional style of specifying application views. In recent versions, you're encouraged to use a pure functional syntax that maps a view's props to the appropriate virtual-dom components. Combined with Flux, this approach is getting closer to what is already established in the Elm world, where it is referred to as The Elm Architecture. This is particularly visible in the increasing emphasis on working exclusively with props, avoiding state when rendering React components, and piping all inputs and events into the dispatcher.
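The core idea — a view as a pure function from props to a virtual-dom description — can be sketched without React itself. Here `VNode` and `h` are hypothetical plain-object stand-ins for React's elements and `createElement`, so the sketch is self-contained:

```typescript
// A minimal virtual-dom node: a plain-object stand-in for a React element.
type VNode = {
  tag: string;
  props: Record<string, unknown>;
  children: (VNode | string)[];
};

// Hypothetical helper playing the role of React.createElement.
const h = (
  tag: string,
  props: Record<string, unknown>,
  ...children: (VNode | string)[]
): VNode => ({ tag, props, children });

// A pure functional component: no state is read or written, so the
// same props always yield the same virtual-dom tree.
type GreetingProps = { name: string };
const Greeting = ({ name }: GreetingProps): VNode =>
  h("div", { className: "greeting" }, h("span", {}, `Hello, ${name}!`));
```

Because the component is a pure function, re-rendering is just calling it again with new props — which is exactly what makes the Flux/Elm-style single event pipeline workable.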
When Don Norman and Bruce Tognazzini write that Apple is giving design a bad name, you sit up and listen. They write that Apple has thrown away well-established design principles and gone for the pretty and snazzy instead.
Recently, I had to work on an animated view for an iOS app. I built the view using explicit layer-based animations (CABasicAnimation and its brethren) in a separate app, and it worked fine. Then I moved the view into the host application and all hell broke loose. After much fighting with the API, I finally arrived at the techniques needed to ensure that the animations work as intended irrespective of context. This post collects these notes as a list of recommendations to follow.
Yesterday, I drove an automatic for the first time. When driving a manual shifter, my brain is on autopilot — I'm seldom aware of my gear shifts and footwork. When I drove the automatic, all of this suddenly needed to be done consciously. So I struggled a bit with a supposedly simpler system. Bleh! The automatic drive is not intuitive.
Now, wait a minute. We know that folks who shift (ahem!) from automatic to manual face a harder struggle. In the software world, there is a similar hurdle faced by folks shifting between operating systems, and yet we see wars of the kind “my OS is more intuitive than yours”. What do people mean when they say something like that?
These are elaborate notes on a “Tech Tonic” talk given at Pramati Technologies, Chennai, on 23rd July 2015. The organization of this post reflects the talk itself.