Saturday, 27 May 2017

On Designing Relevant AI Products for Humans


Many attempts at creating effective user experiences with artificial intelligence (AI) produce newsworthy results, but they do little to increase mainstream adoption.

Routine AI use is still not an everyday thing for most consumers (broadly speaking, anyway; certain demographics and interest groups, such as tech enthusiasts, are exceptions). No one depends on AI assistants the way we depend on, say, word processors or Google Search or mobile devices.

Intelligent assistants are still a novelty, and that may stem from a problem in their design. Most people don’t want to converse with machines on philosophical matters (though it would be cool); they just want to look something up. People don’t want virtual girls in holographic displays asking them how their day went when they come home from work. And people seem to naturally want to troll chatbots in an attempt to explore the limits of their “intelligence”, and perhaps have a laugh while at it.

In fact, the more we attempt to define AI use in the context of human-specific activities, the less it is used. This is perhaps a result of our bias towards emulating the all-knowing, all-powerful and very personal, witty AI found in films like Her or in sci-fi games like Halo. This kind of cultural hinting naturally leads many people to believe that, because the AI is presented in a very human-like fashion, it should act like a human. The fact of the matter is that, in almost all cases, you will probably see through the facade within ten minutes of continuous use.
AI simply isn’t human, and designers shouldn’t help us pretend that it is.
Instead of approaching the question of AI usage in interface design purely from the perspective of our sci-fi fantasies (who knows the future anyway?), we could start with the user: what do they actually need? They probably don’t need an AI with a built-in personality.
They might instead need an AI that can do small but useful things efficiently and reliably. Successful intelligent agents that do just that usually lack any form of personality or similar bells and whistles.

Take Google’s strategy: Google’s assistant and intelligent products haven’t a hint of personality beyond the voice used for voice search. This cuts a lot of cultural and psychological baggage out of the conversation. Since the product clearly appears as a machine, people will not lead themselves to believe that their assistants can do human-like things (barring the occasional witty answer and trivia). This is beneficial because the user isn’t distracted by the product’s attempts at sounding human. The user is instead empowered without even realising it. It just works.

I used to think building an assistant as cold and impersonal as Google Now would be a bad move, but I can now see the logic when comparing Google Now to the competition. Cortana looks so lively, and yet she disappoints me precisely because you keep looking for a human-likeness that all but disappears with continued use. Ditto Siri. I get so distracted by their apparent wit that I can’t seem to get at their actual functionality. Why do I need a witty robot in the house?

By far the most interesting thing Google has ever added to its Google Now app was no less impersonal, but it was extremely useful IMO. Now on Tap can recognise elements displayed at any time on the phone’s screen, allowing a search to be done without leaving the currently open app. If it doesn’t detect what you want, you can simply highlight it and it will search for that instead. It is a perfect design: minimalist, useful and brutally impersonal. It just works.

Intelligent apps shouldn’t be built to seem explicitly human; they should be built to get actual work done, and to do it without distracting. I know this might not be the kind of interesting personal robot some of us would have imagined, but it is probably the only way to make things less awkward between man and machine. The uncanny valley is not a place you’d like your app to end up in, and the same goes for the rest of the consumer AI market and industry.

AI is still an emerging technology, like the Internet before it went mainstream. We are still trying to understand how to integrate the technology into the normal, everyday workflow of average human beings. Human-AI UX design choices may prove to be what makes or breaks consumer AI applications. The potential rewards for cracking the design problem are tantalising; designers must keep seeking the perfect problem-solution fits that real people might care about and design around them.

Tuesday, 16 May 2017

Extending Our Intelligence: The Coming Era of Binary Sapience

Our disembodied extensions of our brains are set to become smarter and more closely linked with us

Our phones and other computing devices are more or less passive creations, bound at every step to their programming. Now, however, they are becoming more responsive to us, more anticipatory of our various needs, subtle tastes and wants; they are becoming more intelligent.
What does it mean for humanity exactly? How far can we take this?

If an intelligent algorithm can read and classify your current mood, and map your mood swings over the course of a day from different data inputs with higher accuracy than your spouse, could it then use that data to subtly improve your mood? Is such a state of affairs desirable, assuming it is even possible with current technology? And what of that data anyway?
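As a purely hypothetical sketch of what such mood mapping might look like (the signal names, weights and thresholds below are invented for illustration, not taken from any real system):

```python
# Hypothetical signal weights; a real system would learn these from data
# rather than hard-coding them.
SIGNAL_WEIGHTS = {
    "typing_speed_drop": -0.4,    # typing slower than personal baseline
    "late_night_activity": -0.3,  # screen use long after midnight
    "upbeat_music": +0.5,         # listening history skews positive
    "exercise_logged": +0.6,      # a workout was recorded today
}

def mood_score(signals):
    """Aggregate the day's observed signals into a crude score in [-1, 1]."""
    score = sum(SIGNAL_WEIGHTS[s] for s in signals if s in SIGNAL_WEIGHTS)
    return max(-1.0, min(1.0, score))

def classify(score):
    """Map the numeric score onto a coarse mood label."""
    if score > 0.2:
        return "good"
    if score < -0.2:
        return "low"
    return "neutral"

day = ["typing_speed_drop", "late_night_activity"]
print(classify(mood_score(day)))  # an assistant might now suggest a break
```

A real system would of course use learned models over far richer inputs; the point of the sketch is only that once mood becomes a number, acting on it becomes a design (and ethics) decision.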

A broad personal artificial intelligence (AI) suite, perhaps more powerful and capable than anything we have today, could one day be set loose on every individual on Earth to map out exactly what makes them tick (the data profile on each individual could be worth quite a lot of money, to say the least), and then actively engage with them to achieve a desired individual state for each person, as set by themselves (or others?). Such a machine could become the sort of ancillary intelligence we see in many works of science fiction: a true intelligence that will allow each person to understand themselves COMPLETELY, and then proceed to assist us in making ourselves better than we possibly could on our own, or even with the help of HUMAN professionals.

An individual human would cease to be a single intelligence and instead become a binary intelligence, with the personal machine significantly expanding our cognition and identity at an almost subconscious level, while itself advancing via software upgrades and learning.

We can imagine a remote future where a newly born child is assigned such a personal intelligence the moment he or she is born. The intelligence would observe the child’s development and consult the corpus of our knowledge to extrapolate the future possibilities of the child’s personality and behaviour, the better to understand how to manage them as they grow up.

Meanwhile, the parents’ own intelligent aides could be synchronised with their child’s, to enhance not just their parenting experience but the way they literally think about their role as parents. We can keep going; we can even imagine AI trained to school children (a personal tutor from the future). Given the knowledge these AIs will have access to, it is reasonable to assume that they will evolve into the best tutors, or at the very least the best tutor-student aids, possible.

Why should any of this be desirable? Why would anyone want to create machines that not only replace humans in a variety of high-level skilled tasks, but also become part of our very being? And we haven't even gone into the ethics of the data produced along the way. But make no mistake, we're heading in that general direction very fast. And the reason is painfully obvious.

While human beings are versatile in terms of initiative and creativity, we don't scale very well in the realm of perfect recall, infinite patience, logic, reasoning, et cetera. Obviously we will need tools to give ourselves these superhuman abilities. And of course, sometimes we might need something to light our way forwards, like when a writer has writer's block and needs a dose of much-needed inspiration from his personal intelligence aid. The more these tools actively interact with us, the better we'll become at handling not just the modern world, but ourselves as well.

However, we shouldn't kid ourselves: advancement comes with risks. The WannaCry worm that rocked the world over the weekend demonstrates not the fragility of our systems, but the fragility of our ability to get a grip on our systems. Our minds are the weak point (which hackers know and exploit), and at the same time our strongest point. We evolve through experience, and in this century the sum total of our experiential knowledge is more accessible and more voluminous in scope and depth than ever before. It would be folly not to add on an extra layer of thinking cortex, an exocortex in the form of actively engaged, thinking machines, in order to make efficient use of our species' knowledge vault.

Monday, 15 May 2017

The Source of Innovation: The Open Commons Alternative


A public library; an important venture, but not a lucrative one
Last week’s best read on the web from the contrary-to-my-personal/popular-beliefs section was, in my opinion, this.

The author has some interesting insights into the sources of America’s innovation: how said innovation comes mostly from government funding and, most importantly, how the private sector tends NOT to produce innovation, or even downright discourages it.

If you’ve been to a fair number of entrepreneurship and startup events, you’ll probably have heard the oft-spoken claim that the private sector is the source of innovation. However, the Guardian article argues to the contrary: government is the source of all major innovations that exist today, and that is most certainly true. Nothing outspends government on risky ventures, certainly not private capital.

Unlike private capital, the spenders of public funds tend to invest in long-term projects that are too expensive to manage with private capital, that will take too long to build with little in the way of direct monetary returns, or that produce goods which are not prone to scarcity (despite being desirable) and are therefore not profitable (for example, national infrastructure for utilities such as water and sewerage). These kinds of ventures are extremely risky but important nonetheless. Just not important enough in the context of capitalism.

Having a system that can easily handle this kind of spending without being held accountable by the principles of capitalism (though obviously this does not mean such a system is exempt from the principles of sustainability and accountability) is what leads to the concept of public spending on public goods (this also includes military spending for the public good that is national defence, a necessary evil unfortunately).

Having said all this, I’d also add that, although this kind of spending is important in the creation of public goods, it would perhaps be myopic to accept that the MODERN STATE ARCHITECTURE is the only logical way to guarantee and execute public goods creation. There are other ways of achieving the same ends via different means that are, arguably, more open and accountable than classic states (or supranational entities, for that matter).

Wikipedia comes to mind here: it is neither a government nor a private entity. It is a commons: a resource owned by the community. Its resource is knowledge, freely created, curated and edited by ordinary people for anyone who can access it. Its funding is derived from generous donations (like taxes, but without the legal coercion). And, most importantly, it works without being directly dependent on government or private capital. It is the middle ground of innovative possibilities that we all want.

Why do I fuss over this? Simple: I want my innovation to be guilt-free and clean, without the stains that taint government-sourced innovation (military R&D for war, sometimes without accountability) and private capital (questionable intents, inaccessibility through artificial scarcity and overpricing).

Innovation that is purely and directly owned by the community is already out there in the wild. In fact, since all innovations can ultimately be traced to the individual minds of ordinary but brilliant folks, we can at least agree that a different approach is in order, one that doesn’t expose these people to potentially perverse incentives (governmental or private in nature).

A collective (commons-based) investment scheme for open innovation might be the third alternative that we desperately need. The public-spending-versus-private-capital discourse should not blind us to this if we really want the march of innovation to continue and sustain itself. There is also space to talk about expanding this approach to create a directly ruled, commons-based system of government, but that’s probably a topic best suited for another day.

Saturday, 13 May 2017

To Share or Not to Share: A Story of the Web, Fake News & Africa

“To remain the web’s weavers and not its ensnared victims, we must merge with our electronic ‘exocortex’, wiring greater memory, thought processing and communication abilities directly into our brains.” — Hughes, James (18 November 2006), “What comes after Homo sapiens?”
Today, while I was searching for a term in my Google app, I noticed an extraordinary news item prominently displayed under the search bar. Its headline reeked of the usual sensationalism you’d expect from modern media outlets and, had it not had anything to do with a fellow East African nation, I probably would have ignored it.
The piece of fake news I encountered. Nothing screams for attention more than “Breaking!”
But it was too distracting and important a headline to ignore. A quick search helped confirm my suspicion: this was most probably fake news being peddled by an unscrupulous website that doesn’t mind publishing pure junk for the ad revenue.

If you’re not familiar with the subject of the headline, here’s the short version: South Sudan, the world’s newest independent country, has a president from the ruling Sudan People’s Liberation Movement (SPLM) party called Salva Kiir. He is currently locked in a bitter rivalry with his own vice president, Machar. This internal strife has cost the country dearly.

Regardless of how you feel about all this, news of the president’s death, given the situation and the nature of ethnic-based conflicts, would be a big blow to the country, nullifying any remote chance of peace for the foreseeable future. People sharing this online can potentially do irreversible damage to the country’s already delicate situation (perhaps it is of small comfort that most of the country’s impoverished citizens might not actually see this news piece).

Unless we do things differently, like what I did immediately after discovering that this was fake news.
Too Fake to Share. I hope Big G still listens to feedback
To be honest, this does not change my personal stance on the general subject of fake news: fake news is a part of humanity, and that will never change. It exists on all sides of the spectrum. What can change is how each and every one of us RESPONDS to any news item we encounter, wherever we are on the web. We should not get so easily caught up in our own web. If we are going to prosper in our web of information, WE THE USERS must learn to navigate it well. We must invest ourselves in keeping our web clean and healthy before we start asking corporations and institutions to keep it clean for us.

The tools for this already exist. All we need now is to inculcate in ourselves the mental discipline to use them. If every human knew how to identify, verify and filter fake news from real news with 100% accuracy (assuming that is even theoretically possible given the constraints of the human brain), we’d never have to worry about a clickbait story sparking a civil war. And it can all start with a simple act, like not sharing a faulty news piece.
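To make the habit concrete, the kind of quick checks I ran before deciding not to share can be imagined as a crude checklist. A hedged toy sketch (the checks and their weights are invented for illustration; this is a mental discipline, not a real fact-checking algorithm):

```python
# Toy credibility checklist. Negative weights are red flags,
# positive weights are signs of verification.
CHECKS = {
    "headline_all_caps_or_breaking": -2,  # "Breaking!" sensationalism
    "no_named_sources": -2,               # nobody quoted on the record
    "unknown_outlet": -1,                 # site with no track record
    "corroborated_by_major_outlet": +3,   # independently reported elsewhere
    "original_documents_linked": +2,      # primary evidence provided
}

def credibility(observations):
    """Sum the weights of the checks that apply to a news item."""
    return sum(CHECKS[o] for o in observations if o in CHECKS)

def should_share(observations, threshold=1):
    """Only share when the item clears a minimum credibility bar."""
    return credibility(observations) >= threshold

item = ["headline_all_caps_or_breaking", "no_named_sources", "unknown_outlet"]
print(should_share(item))  # a story like the one above fails the checklist
```

The exact weights matter far less than the habit of running the checks at all before hitting the share button.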

Saturday, 29 April 2017

Algorithms & Intellectual Inbreeding: Why We Must Emancipate Our Minds



We are spinning an algorithmic prison around our minds
This post was inspired by one of Caitlin Johnstone’s commendable essays, where she noted (as have I) that Medium’s “…‘recommended-for-you-based-on-your-interests’ feature has no idea how to even deal with people like me”.
When you stop learning, you’re dead. And learning only happens when you taste the elements. You learn when you suffer contact. And when you expose yourself to something as diverse as reality, you can only become intellectually stronger. You see more, you know more and you unlearn more.

When you harbour a “conviction”, on the other hand, you raise a ‘truth’ above and beyond all reasonable discourse and raw, unflattering debate. Such is the nature of religions; they demand subservience to a higher power that is beyond ANY kind of reproach. This isn’t a healthy environment for inquiry of any sort. This is an echo chamber made temple. And we have plenty of temples today, shaped by the recommendation algorithms at Facebook, YouTube, Amazon and many others. Our minds are slowly being enslaved by the algorithms.

The creation of the scientific methodology, reasoning and logic was supposed to help us “find the truth” about the mysteries of reality. And yet, in this era of the algorithmic society, we see the creation of reasoning aids that lock us deeper and deeper into barren thought temples of our own making. And we unabashedly relish this, knowingly or otherwise.

The computer and the world wide network envisioned by past and present pioneers were supposed to increase accessibility to information, to assist in the exploration and synthesis of new ideas. While that has largely happened, it has also led to the creation of our “monsters from the id”.



There is a whole world to explore outside your intellectual pen
We are increasingly being cornered into our own individual ways of thinking; it is perhaps apt to call it intellectual inbreeding. We are increasingly bereft of the crucial cross-fertilisation of minds, the kind that actually kindles real understanding and the insight that advances society. Imagine an academic world completely bereft of cross-disciplinary activity, with siloed departments refusing to share information. Where would we be today? But that’s exactly the kind of society our algorithms are crafting for us every time they show us what we might like, tweet or talk about next based on our flawed “beliefs”. No malevolent AI needed.
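The mechanism needs no malevolence at all; a naive recommender that only ever exploits past clicks will pen you in by itself. A deliberately simplified sketch (the topics and the exploit-only policy are invented for illustration; real recommenders are far more sophisticated, but the feedback loop is the same):

```python
TOPICS = ["politics", "science", "art", "sport", "tech"]

def recommend(interest_counts):
    """Recommend whichever topic the user has engaged with most.

    This is a pure 'exploit' policy: it never shows anything new.
    """
    return max(TOPICS, key=lambda t: interest_counts.get(t, 0))

counts = {t: 1 for t in TOPICS}  # the user starts with broad tastes
counts["politics"] += 1          # one extra click on politics...

for _ in range(50):              # ...and the feedback loop takes over
    topic = recommend(counts)
    counts[topic] += 1           # the user clicks what they are shown

print(recommend(counts))         # politics, forever; the pen closes itself
```

A single early click is enough to lock the loop: after fifty rounds the other four topics are never shown again, which is precisely the intellectual pen described above.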

The end result is frightening: whole peoples can no longer understand whole communities. Flame wars and real wars break out because we refuse to comprehend each other. “Cultural clashes” happen because whole communities prefer to sit in silos of their own making. And the algorithms that run social media, and yes, even Medium, help to aggravate the situation.

How can you learn about the left side of politics if you only ever see your own side, and vice versa? How can you understand Brexit if you refuse to abandon the “London mindset”, if only for a short while? How can you understand the consequences of regime change in Syria if you don’t really know what the alternative entails? How can you start a war when you don’t know your enemy beyond what your own culture perceives it to be?

I should not be misconstrued as advocating for hippy peace and cultural understanding. I am more concerned about our mere survival as an intellectual race; learning through suffering contact with other minds. If you don’t touch it, you’ll never get it. Likewise, if you keep reading ThinkProgress all day, you’ll never understand why your BFF voted Trump.

Emancipate your mind today. Break free of the algorithms that rule our minds and explore the works of minds far different from yours, if only for your own sake.

Tuesday, 25 April 2017

Enter ‘Wikitribune’: A Game changer for News Writing or just ‘Meh’?

Wikitribune Campaign from impossible on Vimeo.

Remember when I talked about how the world might benefit from embracing radical openness brought about by using decentralised wikis and similar software systems? Jimmy Wales of Wikipedia fame must have had similar thoughts because today he announced Wikitribune (see the video above).

In a nutshell, Wikitribune is a wiki site for news. The initiative is currently seeking funds via crowdfunding to hire at least ten journalists, for now, to write quality news pieces that people online might care about. According to the initiative’s website, ordinary volunteers from the public will be able to ‘work’ with the hired professionals to write news pieces that are well sourced, fact-checked and created in an open, accountable manner.

Moreover, it has been stressed that the initiative will not rely on advertisements to fund itself. Instead, there will be a monthly subscription, though it seems this subscription will be voluntary: the initiative’s site also mentions that ANYONE will be able to read ANYTHING (that is, no paywalls, which is a big thumbs up from me) and ANYONE will be able to flag any piece that appears problematic, or even fix it before sending it for review (presumably by the hired professionals).

So far, around 3,370 people have contributed to the effort, with 29 days left on the fundraising clock as of the writing of this article. There has been an incredible amount of back-and-forth on the great forums of the web and social media regarding this effort, and even some mainstream media outlets are throwing in their two cents. That is a good thing, because Mr. Wales is right about one thing in all of this:
The news is broken. So broken in fact that some might say it is beyond saving.
If you’re surprised by that last part, then therein lies the problem: this is not news at all. It only seems so now because, well, it pays to say it is so (it brings in them clicks) and it’s good political ammunition considering current political circumstances. In fact, people have been talking about the broken nature of news since the turn of the last century, and maybe even before that (I am currently reading a book titled “How the News Makes Us Dumb” by C. John Sommerville, Emeritus Professor of English History at the University of Florida, published in 1995, but it might as well have been written today). Some people have become so uppity over the issue of broken news and the nature of fake news that they seem to have forgotten that humans have been producing AND listening to fake news since the beginning of civilisation itself. Modern industry turned it into a product, and online advertising made it damn cheap to produce.

However, in the end, the most fundamental problem lies in people themselves. We want to be informed; we want our news, yes, but we want it every second of every day, we want it real cheap and, most importantly, we want to feel FULFILLED by it, not offended. Gossip works in a similar manner (Instagram and WhatsApp gossip literally determines political events on the ground where I come from), and that’s why gossip packaged as a news product sells. That is also why we have libel laws (and why certain religions label gossip a sin, but let it be known that I completely believe in the supremacy of freedom of speech).

So, after all has been said and done, can Wikitribune fix all this? No, of course not. If the goal is to produce everlasting, super-mega, out-of-this-world editing wars and drama between the community and the professionals (which will be news itself and will definitely sell), then yes, it will totally be a smashing success (I don’t know how an initial team of ten people will be able to engage with a global-sized community of volunteers while covering stories at the same time). Otherwise, it is trying too hard to solve an unsolvable problem, kind of like how some early proponents of bitcoin tried to “replace politics with mathematics” (that turned out to be a load of 💩).

But it may set the stage for more accountable news creation, though IMHO Wikipedia can already do this quite well in its current state WITHOUT indulging in too much news creation (we already have a HUGE surplus of the latter). I am of the opinion that all this effort could be better spent on enhancing Wikipedia’s news aggregation capabilities (real news journalists can chip in like all the rest), improving the site’s editing system and whatnot.

I love wikis; I love what they can do, and I also recognise what they cannot do. Wikitribune may very well be able to do what it says it will do, but it will not be able to fix a flawed industry that serves a flawed need. Consequently, the people who sign up for this should prepare themselves to eventually question the entire endeavour when the time comes, just as happened with Wikipedia back when it was a nasty place (it sometimes still is), and perhaps change it in some way, for better or worse. It has yet to fix anything, despite Mr. Wales saying that they have “figured out” how to fix the news. They are still at the very beginning of the discovery process, and there is a lot of ground to cover before Wikitribune can claim to have solved the problem.

In conclusion, I would argue that it would be much more useful to have a wiki site that offers tools and tips on how to be a great wiki journalist, and then set that site’s subscribers loose on Wikitribune, or Wikipedia for that matter. In fact, I think Mr. Wales should probably focus more on improving Wikipedia rather than creating something that may very well end up as a Wikipedia clone, with the journalists doing more editing and community engagement than actual journalism, among other possibilities.

I look forward to seeing what comes out of this initiative. Hopefully we might just learn something new from it all.

Wednesday, 12 April 2017

Our World is Changing: How Might We Master Change?

The Libyan Revolution, an example of change gone wrong. Image shows a tank outside the city of Misrata (Wikimedia Commons)
Change is guaranteed in Nature. Matter undergoes state transitions, ecosystems suffer violent episodes and recover, and your body turns over its red blood cells roughly every 120 days.
Even aspects of our personhood are in a constant state of flux; from childhood to puberty, from adulthood to old age, what we might consider as ‘us’ never stays quite the same. We always seem to find an alien of sorts to cringe at when staring at records of our past selves.

Far from being an existential conundrum, change in the self is evidently our most precious characteristic, imparting a degree of adaptability that has allowed us to survive a world where Change rules supreme. Besides, in a world where humanity was immutable in belief and structure, would we discover anything new? Would we discover that tenacious form of ‘human spirit’?

And yet, despite all this, we also happen to live in a world where change receives a very mixed reception. Few of us embrace change as we would embrace life itself. Even when most people would prefer change over the status quo, there are always those who prefer the past over the present and its futures, no matter what.

The concepts of ‘tradition’, ‘custom’ and perhaps even animism evolved partly out of our natural desire to stay connected to the past. This is something to celebrate, but tradition that fails to acknowledge change tends to breed instability in the long run, as pressure inevitably builds between the forces for change and the forces of the status quo.

Therefore, in a world gripped by ever-morphing new challenges, it is imperative that we master the science of change through the creation of “tools for mastering Change”. Our present tools and structures for managing ourselves (i.e. governments, institutions, policies, etc.) thrive on creating artificial stability. They are fragile. A single parameter change can quickly and violently end things (the global debt market is but one example), and we would be left helpless and unprepared, all ripe for extinction.
Any fool can create change and the tools for manifesting change already exist; the internet is the tamer kind and modern weaponry is the deadliest of them all.
The astute reader will note, however, that I draw a distinction between tools for creating change and tools for mastering it. Any fool can create change, and the tools for manifesting change already exist; the internet is the tamer kind and modern weaponry the deadliest of them all. What we lack are the tools to manage and understand change. Consequently, while we can instigate change amongst ourselves (revolutions, protests for policy change, lobbying), we are COMPLETELY blind to the question of what those changes might entail and how to implement them without killing ourselves in the end.

To put it simply, while it is easy for us to set change in motion, it is difficult for us today to manage that change once it is out there in the wild. We are fragile glass figures that shatter easily, and we shatter ourselves in the process, all in the name of unquestioned, unmanaged, undisciplined change.

Mastering change is about more than merely predicting what lies ahead (an unreliable science in the hands of fools); it is about embracing change just as we would like to live a good life without getting hurt in the process. It is about us as a species learning to drive confidently into this brave new world of ours while not under the influence of our naive selves.

Monday, 10 April 2017

Good Governance Made Simple

Painting depicting Ancient Rome’s Senate

I wrote this short essay on a whim: is it possible to reduce the concept of “good governance” to its most succinct components such that a severely distracted millennial could comprehend it instantly and even perhaps make something of it all? What follows is my take. You be my judge.
I remember a time when the term ‘good governance’ was being thrown around a lot. Often it was used as a placeholder for modern liberal democratic states, classically thought of as the pinnacle of human civilisation and progress. Indeed, that may be so, but are all nascent liberal democracies destined for a life of prosperity? The evidence is shaky at best, non-existent at worst. Perhaps it is no wonder, then, that the term seems to have died a quiet death over the turn of the last decade.

Good governance these days seems to live up there with the rest of the world’s pies in the sky. People are on the lookout for something different, if the election of Trump is anything to go by. It’s unfortunate that things have turned out this way. Good governance is still arguably important to all of us, despite the uptick in global cynicism towards the institutions that can potentially support it. Perhaps the idea is in need of rejuvenation, now more than ever, before it can again become a rallying point for good, honest people to build awesome and relevant institutions.

So, first off, what exactly constitutes good governance anyway? Depending on who you talk to, good governance abides by several principles, such as accountability, transparency, citizen participation, efficiency and responsiveness in service delivery, the rule of law, and inclusiveness. So far so good, but can we make all this any simpler? Let’s start from the perspective of people.

If people cannot natively sense that they are benefiting from an institution’s presence and activities, then the stage is set for tumultuous, revolutionary resets. The key point is obvious: a structure that does not serve cannot sustain its own existence, or the people who depend on it for that matter. So, through reasoning, we can conclude that defined, efficient structures that actually meet demand, with all the trappings of good governance, are desirable. Great!

However, structures are only as good as the weakest link in them, and that weakness stems from, again, people. The weak link emerges in the form of general illiteracy and the inability to comprehend the structure, not only for the purpose of using it, but also for the purpose of improving it. You cannot make progress with a mechanism you cannot comprehend, and you cannot even try to comprehend it without access to basic cognitive support (education, open knowledge, rigorous and open public discourse, open information networks, open tools, etc.).

Therefore, from the above we identify two main components that need to be in place in order to even begin to implement good governance: a defined structure that can be documented in an open manner, is agreed upon and understood as fully as possible by every stakeholder, and lends itself to renewal through intelligent, informed self-amendment by ALL involved stakeholders (I default to the belief that that which may affect all must be decided by all). Structure and knowledge: these, I would therefore argue, are the very foundations of good governance at any level of human organisation.

Sunday, 9 April 2017

Free & Open Technology: A Law unto Itself & why only You can defend it

“Liberty guiding the People” by Eugène Delacroix, 1830

Hello folks, it's been a while since I posted something here. Figured it was high time I atoned for my sins and rejuvenated this blog. I have a Medium page which you can check out here. There's plenty of stuff over there, but you can also stay here and look around at some of my more ancient musings. I'll try to see if I can bring this blog back from the dead. Until then, enjoy! Abraham

The nation state is what it is because it was enabled by technology. Society’s values, infused with nationalistic enthusiasm and vigour, especially in the early part of the last century during the two World Wars, helped instigate that initial enabling, but after that, technology played an even greater role in sustaining the rise of the nation state.

And then a cultural revolution happened: society’s values slowly changed over the later part of the last century, becoming more liberal and open to radical ideas and thought. It was slow in the sense that relatively few people here and there across the United States and the rest of the world truly subscribed to the new cultural movements. Many ‘ordinary’ folks at the time still considered such people to be ‘queer’, strange or downright unnatural and that sentiment was reflected in their attitude towards such radicals.

Amidst all this, the 1% rule was already at work; these small groups of radical thinkers spawned not just weird cultural thought. Some also spawned incredible technological innovations for the masses (people like Richard Stallman come to mind here). Moreover, these technological tinkerers of the time infused their work with their radical values, spawning the Free and Open Source Software movement (FOSS). By the time the internet and web matured into useful technologies, the technological hippies were more than ready to take over the Earth.



The creation of FOSS challenged orthodox thinking on scarcity and copyright. It insulted conventional laws and corporate thinking. The war on Free and Open source and open innovation by the real world’s laws and institutions since then has become a war of attrition, but the gains made by the open source movement have been unmistakable and substantial.

Nevertheless, FOSS may have won many battles, but the war has yet to be decisively won. Open encryption is the latest assault on conventional thinking with regard to national/international security versus privacy. In fact, as the title of this article suggests, open encryption has enabled technology, and by extension the society that uses it, to become a law unto itself; world governments, despite their best efforts at reducing the effectiveness of open encryption in order to police the world and wage cyberwars, have found it more and more difficult to penetrate the increasingly sovereign Internet’s defences. This does not in any way mean that technology is foolproof (it is reasonably secure until the next open source enabled security alert), nor should it be misconstrued as an ‘evil’ development, however much the UK’s Home Office would have you believe.

To call secure and private communications bad would be like calling a fence, home security, and trespass laws bad. They simply aren’t. In fact, the recent terrorist attack at Westminster, London, UK had more to do with the utter incompetence of the government’s monitoring of the perpetrator than with his use of WhatsApp. But the latter proved to be an irresistible scapegoat, I suppose.
Take the recent repeal of net privacy rules in America and the truth becomes obvious: the global war over authority is just beginning. Open technology (and the societal values encoded in it) is a law unto itself, but the nation state is pushing back. It is up to all of us to make the next move.

FOSS is a very, very political statement. So let’s send a message to the lumbering giants of the real world by building things today that resist their control.
“Value your freedom or you will lose it, teaches history. ‘Don’t bother us with politics,’ respond those who don’t want to learn” ~ Richard Stallman