Jaron Lanier recommends a rational view of artificial intelligence, focused on practical, professional results. – But is his downplaying tactical, denying the political strategic plan behind AI?
➡️ Edge.org/conversation: Jaron Lanier – The myth of AI. (2014-11)
> People have been disempowered to serve what were perceived to be the needs of a deity, where in fact what they were doing was supporting an elite class that was the priesthood for that deity. [..] The effect of the new religious idea of AI is a lot like the old idea, religion.
For Lanier (a software architect himself), the approach to working on AI is of course to work on something great and to do good.
When a result is bad, in his view that can only mean it is not fully developed yet; when an algorithm is manipulative, there must be a conceptual weak point in it.
Well, Lanier is not naive. He knows there are algorithms that are intentionally manipulative or serve otherwise terrible purposes. But it is characteristic of American culture that it is unthinkable to talk about the evil side of the U.S. – "U.S." means: we are the good ones. Period.
O.k., if you like. – But for the future of AI we have to consider:
- Space rockets were initially developed to carry nuclear weapons around the globe in the event of war (manned space flight is a side product).
- Integrated circuits were developed to test, operate, and navigate nuclear weapons (PCs are a side product).
- The Internet was developed for communication in the event of a nuclear war (online newspapers, YouTube, etc. are a side product).
- GPS was developed to guide missiles and vessels to their targets (navigation systems for tourists and drivers are a side product).
And today the development of AI is underway .. guess for what purpose.
Face recognition, real-time translation, autonomous driving, etc. are all (militarily useful) side products too.
At this point Jaron Lanier's rational explanation of AI is just as bad as Bostrom's nebulous semi-religious fairy tales. It distracts from the real nature of the strategic think tanks that rule the U.S.; it distracts from their intentions in the development of AI.
> My sense is that the current generation of entrepreneurs who are ready to take a look into technology are for the most part not really money motivated. They might enjoy parallel influence. They might enjoy social status but not particularly money.
Lanier is right: greed for money is not the point. But he downplays the even bigger thrill of controlling others. (After all, Lanier earns his living propagating this American exceptionalism, the right to control others.)
AI is not a fate that "happens to us". AI is planned ahead as the final means to gain global dominance and to control people irreversibly.
AI is neither a higher power nor merely a useful tool. – AI is intended "fate" (one that is fulfilled long before AI comes close to human intelligence).
Some call this event the arrival of the singularity. I would prefer to modify Immanuel Kant's famous definition (Was ist Aufklärung?, 1784):
I migrated my Lanier article from Google+ (an online service that has since been shut down).
Lanier doesn't believe in the gloriole of Wikipedia as 'fundamentally democratic' (he calls it Digital Maoism), and I think he is right.
It is not that 'everyone can contribute' to Wikipedia; rather, everyone is allowed to contribute what conforms to the political power.
Today a sociological process filters what may be written on Wikipedia and what may not. – Why doesn't Lanier see that it is even worse when the filtering is no longer a sociological process but an AI algorithm employed by the political power?
➡️ Jaron Lanier Interview: Keep calm – it's just technology! (telekom.com)
I totally disagree with Lanier's headline "Keep calm – it's just technology!", and I totally agree with these statements from the interview:
> To the degree you believe that digital technology does anything by itself, then it has fooled you. Everything is still done by people and only human responsibility can have any effect, whether positive or negative.
>
> There is no such thing as an intelligent machine. That’s a fantasy that’s used to manipulate people who will then accept direction from the machine. [..] I think we all tend to believe that the technology will somehow do something for us when, ultimately, it is only our own responsibility.