• Algorithms are taking over the world. Not yet completely and not yet definitively, but they are well on their way to automating a lot of tasks and jobs. This algorithmization offers many benefits for organizations and consumers; boring tasks can be outsourced to an algorithm that is exceptionally good at a very dull task, far better than humans could ever become. More complicated tasks can benefit from insights derived from analyzing multiple data sources, and these insights can help humans in the task at hand. Soon, however, these tasks too could be taken over by algorithms.

    We know many examples of the former, ranging from robots that build your smartphone to algorithms that find that particular website within milliseconds. More and more we also see great examples of the latter. There is an algorithm that has a seat on the board of directors of the Hong Kong venture capital firm Deep Knowledge Ventures. In addition, there are algorithms that can instantly translate spoken language into a different language.

    Algorithms are therefore rapidly changing how we do business. Businesses consist of value propositions, customer segments, customer relationships, channels, revenue streams, cost structures, limited resources, partnerships and activities. Algorithms enable each of these elements to be automated, and with in-depth analytics they can become highly sophisticated. Algorithms are the intellect, the interoperability, the connection and the exchange between consumers, things, processes and information that define the value of businesses.

    Algorithms offer enormous potential for organizations, but we have to ensure that they work as we want them to work. We have to ensure that they don’t go awry, and we have to ensure that they don’t harm those who created them: us humans. This might seem far-fetched, but a poorly designed algorithm tasked with protecting the earth could very well conclude that the best way to do so is to eliminate all humans, because humans have a negative effect on the earth.

    There have already been quite a few examples of algorithms that went awry. What about the Facebook algorithm that reconnected users with their exes a few years ago, something they really did not want? Or the book “The Making of a Fly”, which thanks to competing pricing algorithms suddenly cost a staggering $23.6 million? Or the flash crash of the Dow Jones in 2010, caused by algorithms going wild and costing some companies millions of dollars? The most well-known example, however, is Microsoft’s tweetbot Tay, which recently turned racist within mere hours of being released, because Twitter users taught it racism. These are just some examples of algorithms gone wild, and there are many more out there, but they show that we have to ensure that algorithms adhere to human norms and values. In other words, we have to humanize our algorithms.

    Humans have flaws and, consequently, algorithms will have flaws, so how do we humanize algorithms? One of the lessons we should take away from the Microsoft tweetbot disaster is that we have to instill our norms and values in algorithms. We should incorporate into algorithms the norms and values that we think are important, and we should prevent algorithms from going beyond those norms and values. This is easier said than done: how do you capture norms and values in mathematical equations?

    In order to incorporate norms and values in algorithms, we should start by specifying what algorithms are not allowed to do, almost like a set of ’10 Commandments’. Thou shalt not kill, A.I.! In the example of the fictional algorithm protecting the earth, we should prohibit the algorithm from eliminating all humans. That sounds obvious, but the challenge is that there are many restrictions we would have to incorporate in algorithms, similar to the many laws and regulations we have. What is more, we would have to incorporate them in every algorithm that could, for some reason, go awry, even in unexpected ways. If even companies like Microsoft can make such mistakes, it should be obvious that this is a lot more difficult than it sounds.
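    To make this concrete, here is a minimal sketch, in Python, of what such a list of prohibitions could look like in code: a hard-constraint filter that vetoes forbidden actions before an algorithm executes them. This is purely illustrative; the names (PROHIBITED, ActionGuard, propose_action) are hypothetical, and a real system would need a far richer representation of what counts as a forbidden action than simple string labels.

        # Purely illustrative sketch: encode prohibited actions as hard constraints
        # that are checked before an algorithm is allowed to act.
        # All names below are hypothetical.

        PROHIBITED = {
            "harm_human",         # "Thou shalt not kill"
            "deceive_operator",   # no misleading the people supervising it
            "disable_oversight",  # no switching off its own safety checks
        }

        class ActionGuard:
            """Wraps an algorithm's decision step and vetoes disallowed actions."""

            def __init__(self, prohibited):
                self.prohibited = set(prohibited)

            def filter(self, candidate_actions):
                """Return only the candidate actions that violate no hard constraint."""
                return [a for a in candidate_actions if a not in self.prohibited]

        def propose_action(guard, candidate_actions):
            allowed = guard.filter(candidate_actions)
            if not allowed:
                # Nothing safe is left to do: do nothing and ask a human instead.
                return "defer_to_human"
            # Pick the first allowed action; a real system would rank them by utility.
            return allowed[0]

        if __name__ == "__main__":
            guard = ActionGuard(PROHIBITED)
            print(propose_action(guard, ["harm_human", "plant_trees"]))  # -> plant_trees
            print(propose_action(guard, ["harm_human"]))                 # -> defer_to_human

    Even this toy example shows where the difficulty lies: the hard part is not the filter itself, but deciding which real-world behaviours map onto a label such as "harm_human" in the first place.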

    Humanizing algorithms is, I believe, vital if we want to truly benefit from algorithms and artificial intelligence in the long run. Given the many examples out there of A.I. and algorithms going wild, as well as the many experts, world-renowned entrepreneurs and academics who warn us about evil artificial intelligence that could mean the end of humanity, every developer should incorporate this in their designs. What do you think about humanizing algorithms? How do you think we could prevent AI from going crazy? Please share your thoughts in the comments.

  • 1 comment

    Hey Mark, An idea whose time has come. Yes, it's going to be more difficult than it sounds. One of the aspects to factor into humanizing is freedom. Despite rules, sometimes the human being, depending on intention or circumstances, applies freedom to break or twist the rule in order to achieve desired outcomes. While this sounds like a logical process that could be hardwired, sometimes judgement is made on subjective grounds. AI is a tool in the hands of the human being. It cannot fully replace the human being. The human being decides when to deploy the tool, and when to hold it back. I'll give you the example of an Expert Adviser (EA) used to automate currency trading. When you deploy an EA, is it better that you let it run unattended, or that you deploy it only when you decide to? If, for example, there's an announcement that disrupts the market, perhaps against your position, what should the EA do? I'd say the critical thing is to program it to identify the conditions when it should stop running, and either alert a human for a subjective decision to be made objectively, or wait for a signal that adverse conditions have lapsed. I hope this makes sense. Best wishes and good luck, Elizabeth
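    In the same spirit, here is a minimal sketch of the stop-condition logic Elizabeth describes: the Expert Adviser pauses itself when it detects disrupted market conditions, alerts a human, and waits for a human signal before resuming. The class names, fields and the volatility threshold are hypothetical and not taken from any real trading platform.

        # Purely illustrative sketch of an Expert Adviser with a built-in stop
        # condition and a human-in-the-loop resume step. Names and thresholds
        # are hypothetical.

        from dataclasses import dataclass

        @dataclass
        class MarketSnapshot:
            volatility: float         # e.g. recent price range as a fraction of price
            news_event_pending: bool  # a scheduled announcement is imminent

        class ExpertAdviser:
            def __init__(self, max_volatility=0.02):
                self.max_volatility = max_volatility
                self.paused = False

            def adverse_conditions(self, snapshot):
                """Decide whether the EA should stop trading on its own."""
                return snapshot.news_event_pending or snapshot.volatility > self.max_volatility

            def step(self, snapshot):
                if self.adverse_conditions(snapshot):
                    self.paused = True
                    return "alert_human"         # hand the decision back to a person
                if self.paused:
                    return "await_human_signal"  # stay idle until a human resumes it
                return "trade"                   # normal automated operation

            def resume(self):
                """Called by a human once conditions are judged safe again."""
                self.paused = False

    The point, as Elizabeth suggests, is that the EA never decides on its own that conditions are safe again; that judgement stays with the human.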

    Reply