Saturday 27 May 2017

On Designing Relevant AI Products for Humans


Many attempts at creating effective user experiences with artificial intelligence (AI) tend to produce newsworthy results, but do little to drive mainstream adoption.

Routine AI use for most consumers is still not an everyday thing (broadly speaking, anyway; the picture may differ for certain demographics and interest groups such as tech enthusiasts). No one depends on AI assistants the way we depend on, say, word processors, Google Search or mobile devices.

Intelligent assistants are still a novelty, and that may stem from a problem with their design. Most people don’t want to converse with machines on philosophical matters (though it would be cool); they just want to make a quick reference. People don’t want virtual girls in holographic displays asking them how their day went when they come home from work. And people seem to naturally want to troll chatbots, exploring the limits of their “intelligence” and perhaps having a laugh while they’re at it.

In fact, the more we attempt to define AI use in the context of human-specific activities, the less it is used. That is perhaps a result of our bias towards emulating the all-knowing, all-powerful and very personal, witty AI found in films like Her or in sci-fi games like Halo. This kind of cultural hinting naturally leads many people to believe that, because the AI is presented in a very human-like fashion, it should act like a human. The fact of the matter is that, in almost all cases, you’re probably going to see through the facade before ten minutes of continuous use have elapsed.
AI simply isn’t human, and designers shouldn’t help us pretend that it is.
Instead of approaching the question of AI usage in interface design purely from the perspective of our sci-fi fantasies (who knows the future anyway?), we could perhaps start with the user: what do they actually need? They probably don’t need an AI with a built-in personality.
They might instead need an AI that can do small but useful things efficiently and reliably. Successful intelligent agents that do just that usually lack any form of personality or similar bells and whistles.

Take Google’s strategy: its assistant and intelligent products have barely a hint of personality beyond the voice used for voice search. This cuts out a lot of cultural and psychological baggage from the conversation. Because the product clearly presents itself as a machine, people don’t lead themselves to believe that their assistant can do human-like things (barring the occasional witty answer or bit of trivia). This is beneficial because the user isn’t distracted by the product’s attempts at sounding human. The user is instead empowered without even realising it. It just works.

I used to think building an assistant as cold and impersonal as Google Now would be a bad move, but I can now see the logic when comparing Google Now to the competition. Cortana looks so lively, and yet she disappoints precisely because you keep looking for a human-likeness that all but disappears with continued use. Ditto Siri. I get so distracted by their apparent wit and personality that I can’t seem to get at their actual functionality. Why do I need a witty robot in the house?

By far the most interesting thing Google has ever added to its Google Now app was no less impersonal, but it was, in my opinion, extremely useful. Now on Tap can recognise elements displayed at any time on a phone, allowing a search to be done without leaving the currently open app. If it doesn’t detect what you want, you can simply highlight it and it will search for that. It is perfect design: minimalist, useful and brutally impersonal. It just works.

Intelligent apps shouldn’t be built to seem explicitly human; they should be built to get some actual work done, and to do it without distracting. I know this might not be the kind of interesting personal robot some of us would have imagined, but it is probably the only way to make things less awkward between man and machine. The uncanny valley is not a place you’d want your app to end up in, and certainly not the rest of the consumer AI market and industry with it.

AI is still an emerging technology, like the Internet before it went mainstream. We are still trying to understand how to integrate the technology into the normal, everyday workflow of average human beings. Human-AI UX design choices may prove to be what makes or breaks consumer AI applications. The potential rewards for cracking the design problem are tantalising; designers must keep seeking the problem-solution fits that real people actually care about and design around them.

Tuesday 16 May 2017

Extending Our Intelligence: The Coming Era of Binary Sapience

The disembodied extensions of our brains are set to become smarter and more closely linked with us

Our phones and other computing devices are more or less passive creations, bound at every step to man’s programming. Now, however, they are becoming more responsive to us and more anticipatory of our various needs, subtle tastes and wants; they are becoming more intelligent.
What exactly does this mean for humanity? How far can we take it?

If an intelligent algorithm can read and classify your current mood, and map your mood swings over the course of a day from different data inputs with higher accuracy than your spouse, could it subtly use that data to improve your mood? Is such a state of affairs desirable, assuming it is even possible with current technology? And what of that data anyway?

A broad personal artificial intelligence (AI) suite, perhaps more powerful and capable than anything we have today, could one day be set loose on every individual on Earth to map out exactly what makes them tick (the data profile on each individual could be worth quite a lot of money, to say the least), and then actively engage with them to achieve a desired individual state for each person as set by themselves (or by others?). Such a machine could become the sort of ancillary intelligence we see in many works of science fiction: a true intelligence that will allow each person to understand themselves COMPLETELY, and then proceed to assist us in making ourselves better than we could ever be on our own, or even with the help of HUMAN professionals.

An individual human will cease being a single intelligence and will instead become a binary intelligence, with the personal machine significantly expanding that person’s cognition and identity at an almost subconscious level while itself advancing via software upgrades and learning.

We can imagine a remote future where a newborn child is assigned such a personal intelligence the moment he or she is born. The intelligence will observe the child’s development and consult the corpus of our knowledge to extrapolate the future possibilities of the child’s personality and behaviour, to better understand how to manage them as they grow up.

Meanwhile, the parents’ own intelligent aides can be synchronised with their child’s, to enhance not just their parenting experience but the way they literally think about their role as parents. We can keep going; we can even imagine training AIs to school children (a personal tutor from the future). Given the knowledge these AIs will have access to, it is reasonable to assume that they will evolve into the best tutors possible, or at the very least the best tutor-student aids.

Why should any of this be desirable? Why would anyone want to create machines that not only replace humans in a variety of high-level skilled tasks, but also become part of our very being? And we haven’t even gone into the ethics of the data produced and used along the way. But make no mistake, we’re heading in that general direction very fast. And the reason is painfully obvious.

While human beings are versatile in terms of initiative and creativity, we don’t scale very well in the realm of perfect recall, infinite patience, logic, reasoning, et cetera. Obviously we will need tools to give ourselves these superhuman abilities. And of course, sometimes we might need something to light our way forwards, like when a writer is blocked and needs a much-needed dose of inspiration from a personal intelligence aide. The more these tools actively interact with us, the better we’ll become at handling not just the modern world, but ourselves as well.

However, we shouldn’t kid ourselves: advancement comes with risks. The WannaCry worm that rocked the world over the weekend demonstrates not the fragility of our systems, but the fragility of our ability to get a grip on our systems. Our minds are the weak point (which the hacker knows and exploits), and at the same time our strongest point. We evolve through experience, and in this century the sum total of our experiential knowledge is more accessible and more voluminous in scope and depth than ever before. It would be folly not to add an extra layer of thinking cortex, an exocortex of actively engaged, thinking machines, in order to make efficient use of our species’ knowledge vault.

Monday 15 May 2017

The Source of Innovation: The Open Commons Alternative


A public library: an important venture, but not a lucrative one
Last week’s best read on the web from the contrary-to-my-personal/popular-beliefs section was, in my opinion, this.

The author has some interesting insights into the sources of America’s innovation: how said innovation comes mostly from government funding and, most importantly, how the private sector tends NOT to produce innovation, and sometimes downright discourages it.

If you’ve been to a fair number of entrepreneurship and startup events, you’ll probably have heard the oft-repeated claim that the private sector is the source of innovation. The Guardian article, however, argues to the contrary: government is the source of all the major innovations that exist today, and that is most certainly true. Nothing can outcompete government spending on risky ventures, least of all private capital.

Unlike private capital, the spenders of public funds tend to invest in long-term projects that are too expensive to finance privately, take too long to build with little in the way of direct monetary returns, or are not prone to scarcity (despite being desirable) and therefore not profitable (for example, national infrastructure for utilities such as water and sewerage). These kinds of ventures are extremely risky but important nonetheless; just not important enough in the context of capitalism.

Having a system that can easily handle this kind of spending without being held to the logic of capitalism (though obviously this does not mean such a system is exempt from the principles of sustainability and accountability) is what leads to the concept of public spending on public goods (this also includes military spending for the public good that is national defence, an unfortunate but necessary evil).

However, having said all this, I’d also add that, although this kind of spending is important in the creation of public goods, it would perhaps be myopic to accept that the MODERN STATE ARCHITECTURE is the only logical way to guarantee and execute the creation of public goods. There are other ways of achieving the same ends that are, arguably, more open and accountable than classic states (or supranational entities, for that matter).

Wikipedia comes to mind here: it is neither a government nor a private entity. It is a commons: a resource owned by the community. Its resource is knowledge, freely created, curated and edited by ordinary people for anyone who can access it. Its funding is derived from generous donations (like taxes, but without the legal coercion). And, most importantly, it works without being directly dependent on government or private capital. It is the middle ground of innovative possibilities that we all want.

Why do I fuss over this? Simple: I want my innovation to be guilt-free and clean, without the stains that taint government-sourced innovation (military R&D for war, sometimes without accountability) and private capital (questionable intent, inaccessibility through artificial scarcity and overpricing).

Innovation that is purely and directly owned by the community is already out there in the wild. In fact, since all innovations can ultimately be traced to the individual minds of ordinary but brilliant people, we can at least agree that a different approach, one that doesn’t introduce potentially perverse incentives (governmental or private in nature) to those people, is in order.

A collective (commons-based) investment scheme for open innovation might be the third alternative that we desperately need. The public spending versus private capital discourse should not blind us to this if we really want the march of innovation to continue to thrive and sustain itself. There is also space to talk about expanding this approach to create a directly ruled, commons-based system of government, but that’s probably a topic best suited for another day.

Saturday 13 May 2017

To Share or Not to Share: A Story of the Web, Fake News & Africa

“To remain the web’s weavers and not its ensnared victims, we must merge with our electronic ‘exocortex’, wiring greater memory, thought processing and communication abilities directly into our brains.”
Hughes, James (18 November 2006), “What comes after Homo sapiens?”
Today, while I was searching for a term in my Google app, I noticed an extraordinary news item prominently displayed under the search bar. Its headline reeked of the usual sensationalism you’d expect from modern media outlets and, had it not had anything to do with a fellow East African nation, I probably would have ignored it.
The piece of fake news I encountered. Nothing screams for attention more than “Breaking!”
But it was too distracting and important a headline to ignore. A quick search helped confirm my suspicion: this was most probably fake news being peddled by an unscrupulous website that doesn’t mind publishing pure junk for the ad revenue.

If you’re not familiar with the subject of the headline, here’s the short version: South Sudan, the world’s newest independent country, has a president from the ruling Sudan People’s Liberation Movement (SPLM) party named Salva Kiir. He is currently locked in a bitter rivalry with his own vice president, Machar, and this internal strife has cost the country dearly.

Regardless of how you feel about all this, news of the president’s death, given the situation and the nature of ethnic-based conflicts, would be a big blow to the country, nullifying any remote chance of peace for the foreseeable future. People sharing this online can potentially do irreversible damage to the country’s already delicate situation (perhaps it is of small comfort that most of the country’s impoverished citizens might not actually see this news piece).

Unless, that is, we do things differently, as I did immediately after discovering that this was fake news.
Too Fake to Share. I hope Big G still listens to feedback
To be honest, this does not change my personal stance on the general subject of fake news; fake news is a part of humanity, and that will never change. It exists on all sides of the spectrum. What can change is how each and every one of us RESPONDS to any news item we encounter, wherever we are on the web. We should not get so easily caught up in our own web. If we are going to prosper in our web of information, WE THE USERS must learn to navigate it well. We must invest ourselves in keeping our web clean and healthy before we start asking corporations and institutions to keep it clean for us.

The tools for this already exist. All we need now is to inculcate in ourselves the mental discipline to use them. If every human knew how to identify, verify and filter fake news from real news with 100% accuracy (assuming that is even theoretically possible given the constraints of the human brain), we’d never have to worry about a clickbait headline sparking a civil war. And it can all start with a simple act, like not sharing a faulty news piece.