
Saturday, 27 May 2017

On Designing Relevant AI Products for Humans


Many attempts at creating effective user experiences with artificial intelligence (AI) produce newsworthy results, but do little to drive mainstream adoption.

Routine AI use is still not an everyday thing for most consumers (broadly speaking, anyway; certain demographics and interest groups, such as tech enthusiasts, may be exceptions). No one depends on AI assistants the way we depend on, say, word processors or Google Search or mobile devices.

Intelligent assistants are still a novelty, and that may stem from a problem with their design. Most people don’t want to converse with machines on philosophical matters (though it might be cool); they just need to look something up. People don’t want virtual girls in holographic displays asking them how their day went when they come home from work. And people seem naturally inclined to troll chatbots in an attempt to explore the limits of their “intelligence”, and perhaps have a laugh while at it.

In fact, the more we attempt to define AI use in the context of human-specific activities, the less it is used. This is perhaps a result of our bias towards emulating the all-knowing, all-powerful and very personal, witty AI found in films like Her or in sci-fi games like Halo. This kind of cultural hinting naturally leads many people to believe that, because the AI is presented in a very human-like fashion, it should act like a human. The fact of the matter is that, in almost all cases, you’re probably going to see through the facade within ten minutes of continuous use.
AI simply isn’t human, and designers shouldn’t help us pretend that it is.
Instead of approaching the question of AI usage in interface design purely from the perspective of our sci-fi fantasies (who knows the future anyway?), we could start with the user: what do they actually need? They probably don’t need an AI with a built-in personality.
They might instead need an AI that can do small but useful things efficiently and reliably. Successful intelligent agents that do just that usually lack any form of personality or similar bells and whistles.

Take Google’s strategy: Google’s assistant and intelligent products haven’t a hint of personality beyond the voice used for voice search. This cuts a lot of cultural and psychological baggage out of the conversation. Since the product clearly presents itself as a machine, people don’t lead themselves to believe their assistant can do human-like things (barring the occasional witty answer or piece of trivia). This is beneficial because the user isn’t distracted by the product’s attempts at sounding human. The user is instead empowered without even realising it. It just works.

I used to think building an assistant as cold and impersonal as Google Now would be a bad move, but I can now see the logic when comparing Google Now to the competition. Cortana looks so lively, and yet she disappoints me precisely because you keep looking for the human-likeness, which all but disappears with continued use. Ditto Siri. I get so distracted by their apparent wit that I can’t seem to get at their actual functionality. Why do I need a witty robot in the house?

By far the most interesting thing Google has ever added to its Google Now app was no less impersonal, but it was extremely useful in my opinion. Now on Tap can recognise elements displayed on the phone’s screen at any time, allowing a search to be done without leaving the currently open app. If it doesn’t detect what you want, you can simply highlight it and it will search for that. It is a perfect design: minimalist, useful and brutally impersonal. It just works.

Intelligent apps shouldn’t be built to seem explicitly human; they should be built to get actual work done, and to do it without distracting. I know this might not be the kind of interesting personal robot some of us would have imagined, but it is probably the only way to make things less awkward between man and machine. The uncanny valley is not a place you’d like your app to end up in, and neither would the rest of the consumer AI market and industry.

AI is still an emerging technology, like the Internet before it went mainstream. We are still trying to understand how to integrate it into the normal, everyday workflow of average human beings. Human-AI UX design choices may prove to be what makes or breaks consumer AI applications. The potential rewards for cracking the design problem are tantalising; designers must keep seeking problem-solution fits that real people actually care about, and design around those.

Tuesday, 16 May 2017

Extending Our Intelligence: The Coming Era of Binary Sapience

The disembodied extensions of our brains are set to become smarter and more closely linked with us

Our phones and other computing devices are more or less passive creations, bound at every step to man’s programming. Now, however, they are becoming more responsive to us, more anticipatory of our various needs, subtle tastes and wants; they are becoming more intelligent.
What exactly does this mean for humanity? How far can we take it?

If an intelligent algorithm can read and classify your current mood, and map your mood swings over the course of a day from different data inputs with higher accuracy than your spouse, could it subtly use that data to make your mood better? Is such a state of affairs desirable, assuming it is even possible with current technology? And what becomes of that data anyway?

A broad personal artificial intelligence (AI) suite, perhaps more powerful and capable than anything we have today, could one day be set loose on every individual on Earth to map out exactly what makes them tick (the data profile on each individual could be worth quite a lot of money, to say the least), and then actively engage with them to achieve a desired individual state, as set by each person (or by others?). Such a machine could become the sort of ancillary intelligence we see in many works of science fiction: a true intelligence that will allow each person to understand themselves completely, and then assist us in making ourselves better than we could ever be on our own, or even with the help of human professionals.

An individual human will cease to be a single intelligence and instead become a binary intelligence, with the personal machine significantly expanding our cognition and identity at an almost subconscious level, while itself advancing via software upgrades and learning.

We can imagine a remote future where a newly born child is assigned such a personal intelligence the moment he or she is born. The intelligence will observe the child’s development and consult the corpus of our knowledge to extrapolate the future possibilities of the child’s personality and behaviour, the better to understand how to guide him or her while growing up.

Meanwhile, the parents’ own intelligent aides can be synchronised with their child’s, enhancing not just their parenting experience but the way they literally think about their role as parents. We can keep going; we can even imagine AIs trained to school children (a personal tutor from the future). Given the knowledge these AIs will have access to, it is reasonable to assume that they will evolve to become the best tutors, or at the very least the best tutor-student aides, possible.

Why should any of this be desirable? Why would anyone want to create machines that not only replace humans in a variety of high-level skilled tasks, but also become a part of our very being? And we haven’t even touched on the ethics of the data produced along the way. But make no mistake, we’re heading in that general direction very fast. And the reason is painfully obvious.

While human beings are versatile in terms of initiative and creativity, we don’t scale very well in the realm of perfect recall, infinite patience, logic, reasoning, et cetera. Obviously we will need tools to give ourselves these superhuman abilities. And sometimes we might need something to light our way forward, like when a writer is blocked and needs a dose of inspiration from his personal intelligence aide. The more these tools actively interact with us, the better we’ll become at handling not just the modern world, but ourselves as well.

However, we shouldn’t kid ourselves: advancement comes with risks. The WannaCry worm that rocked the world over the weekend demonstrates not the fragility of our systems, but the fragility of our ability to get a grip on our systems. Our minds are the weak point (which the hacker knows and exploits), and at the same time our strongest point. We evolve through experience, and in this century the sum total of our experiential knowledge is more accessible, and more voluminous in scope and depth, than ever before. It would be folly not to add an extra layer of thinking cortex, an exocortex of actively engaged, thinking machines, in order to make efficient use of our species’ knowledge vault.

Monday, 28 October 2013

Weekend Review: Machine Learning and its Possibilities

Because I have now started clinical rotations for this academic year, the posts I write will consequently be fewer and more haphazard in frequency. I thank my readers for their support in making this project worth the effort. Please continue to visit; you can keep track by bookmarking or using your favourite RSS application.

While going through my weekend net readings, I was delighted by a very engaging article from The Verge featuring Microsoft cofounder Paul Allen’s thoughts on machine learning and its potential to impact our lives. But to truly understand the nature of what we’re dealing with here, I wish to touch on a couple of other articles that serve to augment Paul Allen’s thoughts.

Machine learning has much to do with the famous English mathematician Alan Turing. His seminal 1950 paper, 'Computing Machinery and Intelligence', is still cited today. In a BBC news feature from earlier this month, I discovered to my surprise (surprise because I am, of course, NOT a learned computer scientist, only a humble knowledge harvester/doctor) that Turing had tried to refute some of the claims put forth by an equally famous predecessor, Ada Lovelace, regarded by some as history’s first computer programmer. It appears Ms. Lovelace was of the opinion that computing machines can never give us surprising insights; they can only put forth what we expect of them. While this seems straightforward to many of us, it didn’t ring true to Mr. Turing. He proposed that if the computational power of computers continued to increase with time, what was to stop them from becoming as sophisticated as the human brain and, like that organ, coming up with surprising ideas of their own?

Today, it is argued that even Google’s autocomplete function can sometimes suggest insightful queries that we, the searchers, might never have thought to ask. All this brings us back to Paul Allen: although the man is a supporter of artificial intelligence development and has a good number of institutions under his name doing just that, he denies the idea that computers will soon (as in less than a century from now) match or even outstrip the computing power of their creators’ brains, the so-called 'Singularity' event. He offers several points to support this rather surprising stance.

In good academic fashion, Ray Kurzweil, the originator of the term 'Singularity', offers a response to Paul’s refutation, citing several counterpoints as well as the possibility that his opponent has misunderstood the crux of the problem. It is not my job to determine who is right and who is wrong; I’ll leave that for you to decide. But what is agreed upon is that computers are indeed getting more powerful every day. We are promised opportunities (like intelligent space probes that will explore the galaxy for us) and threats (like autonomous drones that will decide for themselves whether or not to kill). As a result, the future seems more murky and wonderful than ever before.