Collecting a bag of what I think are good ideas from the history of programming, ideas that lead to better thinking about system design, implementation, and evolution.
Note: This post will continuously be edited to include ideas as they come to mind as worthy of inclusion here. The ideas are in no particular order; this is just a brain dump.
(Cross-posted from Imaginea Labs.)
The distributed ledger protocol used by blockchains has produced systems in which we do not have to place trust in the particular parties maintaining the ledgers. Moreover, the ledgers are programmable via “smart contracts”: transactors whose state changes are recorded and validated on the blockchain. A collection of smart contracts describes a system that is expected to uphold certain invariants relevant to its domain; an election system, for example, is expected to maintain voter confidentiality. While the code for these smart contracts is open for anyone to read, those participating in the systems they run are not, in general, competent to evaluate them, leaving the OSS community as the sole eyes on the contracts being deployed. In this post, I examine how smart contracts can provide “warranties” that are easier to ratify and that describe clear, automatic consequences of violating the warranted properties.
Blockchain tech, especially smart contracts, is the hot new “internet”. Since the creation of Bitcoin, we’ve seen the rise of the public smart contract system Ethereum and of several private systems such as the Linux Foundation’s Hyperledger. These distributed ledgers have become a brand new foundation to build apps on, as app developers hope to leverage the additional trust the ledgers are supposed to provide by virtue of their distributed nature.
(Cross-posted here from blog.imaginea.com - Blockchain apps must be closed systems)
Can we have a signin/signup flow that is email-based and passwordless, similar to a “forgot password” flow, but where the URL works only for the initiator, and only once per signin? This is the scheme I’ve implemented on Patantara, and I describe its innards here.
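The post describes the scheme’s innards; as a rough sketch of how such “one-time, initiator-only” links can work, one plausible construction binds a single-use emailed token to a secret held only by the initiating browser (e.g. as a cookie). This is my own illustrative version, not necessarily Patantara’s implementation, and all names here are invented:

```python
import hashlib
import secrets
import time

# In-memory token store; a real deployment would use a database.
# Maps token-hash -> (email, browser_secret_hash, expiry_timestamp).
_pending = {}

TOKEN_TTL_SECONDS = 15 * 60


def _h(value):
    """Hash secrets before storing, so a leaked store reveals nothing."""
    return hashlib.sha256(value.encode()).hexdigest()


def initiate_signin(email):
    """Start a signin: returns (emailed_token, browser_secret).

    The token goes into the signin link emailed to the user; the
    browser secret is set as a cookie in the initiating browser, so
    the link works only from where the signin was started.
    """
    token = secrets.token_urlsafe(32)
    browser_secret = secrets.token_urlsafe(32)
    _pending[_h(token)] = (email, _h(browser_secret),
                           time.time() + TOKEN_TTL_SECONDS)
    return token, browser_secret


def complete_signin(token, browser_secret):
    """Consume the token: succeeds at most once, only before expiry,
    and only with the matching browser secret. Returns the email on
    success, else None."""
    entry = _pending.pop(_h(token), None)  # pop => single use
    if entry is None:
        return None
    email, secret_hash, expiry = entry
    if time.time() > expiry:
        return None
    if not secrets.compare_digest(secret_hash, _h(browser_secret)):
        return None
    return email
```

Note one design choice in this sketch: the token is consumed even on a failed attempt, so a stolen link tried from another browser also invalidates it rather than leaving it open for retries.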
Recently, many techies have spoken out against Google’s interview process, broadly feeling that their real, demonstrated abilities are not being valued. The most famous of these cases is Max Howell, the developer of Homebrew, being rejected in the interview. Following Google, Amazon, and the like, much smaller companies have also begun to subject interview candidates to such “problem solving exercises”, either on a whiteboard or within test environments such as HackerRank, where you can be rewarded for producing the wrong answer quickly instead of the right answer slowly. These same candidates would likely speak up against those companies as well, had they interviewed there. Is there a real problem with this interviewing technique, or are these candidates crying sour grapes?
Traffic in any major Indian city can seem crazy to an outsider. Crazy, scary, impossible, noisy, unruly, chaotic … you can keep rattling off adjectives without ever ending up in a jam. Of these cities, Chennai is perhaps the craziest.
(This article was originally posted on out-of-office.)
The recent Wired interview on artificial intelligence between Joi Ito, head of the MIT Media Lab, Scott Dadich, editor in chief of Wired, and US President Barack Obama brought many perspectives into one room. The discussion is a great read, covering the morally ambiguous ground we might need AI to inhabit when we put it into self-driving cars, the role of government in funding and checking AI research, and … Star Trek. As a techie (and Trekkie), it is hard for me to resist the temptation of having a general AI at my disposal. But what would the big picture look like? Would we be much better off with general AI all around us? Would AIs end up taking over the world, as dystopian science fiction usually paints it, leaving us to fight to survive … maybe? Would I want to be in a world with general AIs all around, or would I find that world wanting?
When we begin working on a problem, one of our main tasks is to build a theory of the problem space, so that we can capture and communicate our understanding to others and to machines. Given that when we start building these theories we may know only a few parts of the elephant, and not the whole elephant itself, what chance do we have of discovering the whole elephant from a few limited perspectives? In this post, I share an example of arriving at higher-level theories about a domain through bottom-up exploration using systematic beta abstraction.
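To make the “beta abstraction” move concrete before reading the post: beta abstraction is the inverse of beta reduction. You replace a constant buried in a concrete expression with a parameter, so that applying the new function to that constant recovers the original. Done repeatedly and systematically, each abstraction lifts the code into a more general theory of the domain. A minimal sketch, with an invented billing example (not the post’s own):

```python
# Step 0: a fully concrete expression, computing a bill total
# for a fixed cart with 18% tax baked in.
concrete = sum(price * qty for price, qty in [(10, 2), (5, 3)]) * 1.18

# Step 1: beta-abstract the tax multiplier. Replacing the constant
# 1.18 with a parameter `rate` yields a function; applying it to
# 1.18 gives back exactly the original expression.
def with_tax(rate):
    return sum(price * qty for price, qty in [(10, 2), (5, 3)]) * rate

# Step 2: beta-abstract the cart as well. Now the expression has
# become a small, reusable "theory" of billing, discovered bottom-up
# from a single concrete computation.
def bill(cart, rate):
    return sum(price * qty for price, qty in cart) * rate
```

Each step preserves meaning by construction: `bill([(10, 2), (5, 3)], 1.18)` beta-reduces back to the concrete expression we started from.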