Abstract / bottom line:
The founding of the new, heavily funded non-profit research company OpenAI gives me (half a year after my first analysis) a further opportunity to voice my criticism of a questionable approach to the AI topic.
I think only this is valid beyond all question:
The world demands the right to govern and restrict AI in its global political and social consequences! (Humanity demands that AI not be left as a question of companies, national interests and technology.)
Steven Levy outlines in his Medium article how OpenAI was formed, and he interviewed Elon Musk and Sam Altman on the topic.
I use the answers in the interview (repeated here shortened and paraphrased) to place my comments. (The numbering does not follow the order of the interview.)
OpenAI statement 1: We don't want the one single entity that is a million times more powerful than any human.
This is the only statement in the interview that says roughly what Altman and Musk mean technically when they use the term AI. The introduction of OpenAI likewise avoids saying how far they go with the addition "general" to AI - AGI. (Technical goals of OpenAI see )
I think they are right to do so. - Personally, I'm sure artificial general intelligence will
- not really work (not be the comprehensive, super-smart, godlike mind)
- not be the threat to humanity (which AI as such indeed is).
(More precisely: long before artificial general intelligence can surpass human intelligence, people will have lost any freedom of independent thinking and of free communication unfiltered by (lower) AI.)
Everything else in AI development, the tying of deep learning to narrow purposes, will certainly lead to intelligent capabilities of a huge, even shocking extent.
They call it AI, but it's actually little more than collecting and processing data, ultimately to replace people and to control people's lives and their perception of reality.
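To make this claim concrete, here is a minimal, purely hypothetical sketch (all names, weights and thresholds invented for illustration) of what is often marketed as "AI": collected user data fed through a fixed, hand-tuned scoring rule, with no intelligence involved.

```python
# Hypothetical sketch: "AI" as mere data collection and processing.
# Nothing is learned here; a fixed rule scores the collected records.
from dataclasses import dataclass

@dataclass
class UserRecord:
    visits_per_day: float   # collected behavioral data
    shares_per_day: float

def influence_score(user: UserRecord) -> float:
    # A hand-tuned linear rule; the weights are invented, not learned.
    return 0.7 * user.visits_per_day + 0.3 * user.shares_per_day

def rank_targets(users: list[UserRecord]) -> list[UserRecord]:
    # "Processing data ... to control people's lives": sort users by
    # how rewarding they are to target.
    return sorted(users, key=influence_score, reverse=True)
```

Systems of this kind scale up enormously, but the principle - record behavior, score it, act on the ranking - stays this simple.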
As an introduction, here are two examples of what "AI" is:
(A bit fictional and simplifying. If you don't like that, jump to statement 2.)
1) Today (in the pre-AI social system) it is a question of conditioned social behavior how smoothly the occupational group of politicians performs and how smartly the mass media handle news. - To secure the established political power, certain services have to expend a lot of effort. Hidden services have to observe and downvote (sometimes delete) the millions of unwanted (enlightening) blog posts and comments, suppress or block unwanted (truly informative) websites and tweets in the search rankings, observe or arrest RL activists etc. - and the services that are not hidden have to ensure that the masses are entertained by TV.
Under the AI social system all of this is eased and automated. Anything unwanted simply doesn't exist in the public perception. Every single social behavior of every single citizen on the planet is not only predictable; it is also known in what way this citizen is impressionable and by what TV supply his potentially wrong behavior is preventable - automatically. It would be quite absurd to still have politicians or all these pre-AI measures. Everything is computable, and it would be mad not to pre-calculate social events (formerly known as "elections" etc.) when it is possible to do so (at the push of a button).
The will of the people and public opinion become freely movable slider bars. And every position of the slider simply has its price tag.
2) A high-speed swoop by a dozen killer dogs takes a few seconds from the horizon into the military camp. The robot dogs somersault when they brake from 80 MPH, dash zigzag through the camp while furiously jumping up and down and firing their fast precision shots. Within 15 seconds (and using no more than 400 bullets) 400 well-trained soldiers are killed - except for three infiltrated guys (the dogs are equipped with face recognition). The squad pulls back as fast as it appeared. (Boston Dynamics' "WildCat" video of 2013 gave a rough first impression.)
Both examples are not exactly farsighted. They are rather trivial - but ask someone in 1916 what the Bikini Atoll would look like 30 years later..
OpenAI statement 2: An oversight over development of AI? - We are super conscious of safety and possible bad AI. If we do see something that we think is potentially a safety risk, we will want to make that public. 
Altman and Musk know they have no chance of being any sort of "oversight". - Not because AI will be so 'mysterious' or 'mighty' or 'beyond any control', but because they know the following facts: the direction of technical development that is crucial for political and military power is
a) never a question of a R&D department ("we fund an open company")
b) never a question of good intentions ("we are conscious of safety")
c) never a question of public awareness ("we will make that public").
It's just the other way round: OpenAI's appearance is like a fake oversight. It allays people's alertness with something like: "We are the good ones." "It is all open to anyone." "We ensure democratic watching." etc.
Don't forget, they are billionaires and know from their circles –better than anyone else– how strategic think tanks work and by what forces the U.S. is dominated.
I don't fault them for it, but be aware of it: they do not want to change this political situation (the question of whose hands the power over future AI is in); they want to have a foot in the door.
Just like Google has, when it develops autonomously moving robots and vehicles (which can quickly mutate into intelligent killer units).
Bad AI? - Thankfully, the clique that dominates US research and US foreign policy and pursues its own global strategy is not Dr. Evil.
OpenAI statement 3: We want AI to be widespread and personalized to the degree that you can tie it to an extension of individual human will.
Altman and Musk also know this: already today smartphones do not bind "ecosystems" (iOS etc.) to individual users but, conversely, bind users to an "ecosystem".
Just as today's smartphone users usually can't design their own OS or their apps, AI will be a system people can customise (if at all) to a certain degree, but of course not control in its core.
Today a smartphone is in most cases not a device with which to control but a device through which to be controlled (see Part I of my essay: Kurzweil skews the truth).
All the more will tomorrow's AI control individuals. Not necessarily in a literal sense - control mainly means self-surveillance. Disciplining and surveillance are matters of sociology, not of technology. Devices and software are only amplifiers - though AI will be an amplifier to a totalitarian extent.
Having widespread AI (many AIs instead of a few central ones) makes no difference. The disciplining, the enforced conformity, stays the same - or, if you prefer a more relaxed definition, the principle of submission to a system stays the same.
Already a light-AI service like Siri is too demanding to run as a personal (local) app. - Real AI will be a billion times more demanding. The only choice you will have is to accept the system or to reject it: to be Borg or to be lost.
OpenAI statement 4: Because we are not a for-profit company we will stay focused on individual empowerment and making humans better.
Altman and Musk don't tell the whole story at this point. - Actually, it makes no big difference whether you have a for-profit company with a non-profit subsidiary or, conversely, a non-profit company with a subsidiary for making profit later on.
The difference lies in PR and in the image of the brand. "OpenAI" is quite cleverly chosen. Either way, the project wouldn't make a profit in the next years anyway.
But when the AI issue reaches a breakthrough, it will be a completely new game - commercially and politically - no matter whether something was formerly "open", "non-profit", "transparent", "freely owned by the world" or whatever.
OpenAI statement 5: We share all research results openly. In contrast Google will perhaps not share its developments with the public, as time rolls on and we get closer to something that surpasses human intelligence.
Well, "when we get close to something that surpasses human intelligence"..
.. Can you imagine what that will trigger? - Compared with that, the U.S. reaction to 9/11 was a little cough. Within moments the claim to global dominance will go sky high. They will no longer have to have regard for anything (but they are clever; they will do it softly). – It will not be a state, a social system or a company that takes over forever. It is a clique that silently and carefully occupies all strategic positions in the world. – Already today no one has a reason to question the attitude "We are all one community of open-minded and enlightened human beings" – like frogs in the comfortingly warm boiler.
Altman and Musk are not idiots. They know it. And they know it will not only be Google that has to stop sharing (under threat of being accused of high treason) when it turns out that some finding can be used as the ultimate weapon.
So the openness of OpenAI is given under reserve - or, to say it more clearly: it is deception.
OpenAI statement 6: We compete for the best scientists with companies like Deep Mind, Facebook etc. Maybe the team grows to hundreds in the future.
They conceal the crucial point. In the endgame of the AI race it will not be about competition or about how openness appeals to researchers. - It's the same topic as in the previous point: when the issue rises to the level of national interest (think of the Manhattan Project), it will be decided by law who recruits whom. And for guys like Altman and Musk only one thing counts: having a seat in the boat at that stage.
OpenAI statement 7: The OpenAI project will take however long it takes to build AGI. - It is probably a multi-decade project.
The nonchalance about how long it will take is feigned. Altman and Musk try to persuade us of the "openness of the future". But the truth is, the next 30 years will bring this openness to a halt.
They know it is a race, and I guess they would admit that 2045 is quite a plausible date.
Any sort of planning beyond this magic wall of (as it is seen everywhere) looming and forever unbreakable totalitarianism is somehow invalid.
OpenAI statement 8: If everyone has AI powers, then there’s not any one person or a small set of individuals who can have AI superpower.
This Elon Musk quote is marked as a 'Top highlight' in the interview. - The logic behind this statement easily leads to more conclusions:
- If everyone can publish his own YouTube channel, then we will never be manipulated again. CNN, Nixon, G.W. Bush, Colin Powell etc. will never be able to tell lies again.
- If everyone can go to the polls, then we will never again have a government that plans and inflames conflicts in the world, sends us into a Vietnam war etc.
I always thought Musk was a decent person. But this statement sounds as if he were a fan of demagogic self-delusion.
My verdict on OpenAI (more fairly: on what was said in the interview):
At almost no point did Altman and Musk tell us the truth. – It is typical US-American egocentricity to simply do what is best for oneself and to expect the whole world to believe it is a bounty to the world.
But in the face of "2045", which will probably be doom to the world, it is not.
- Stephen Hawking warns artificial intelligence could end mankind .. 2014-12-02
- My comment on Hawking's warning .. 2016-01-19
- Andrey Kurenkov: Neural nets and deep learning .. 2015-12-24
- docs.Google: An introduction to Machine Learning .. 2016-01-17
- Transcription: Jaron Lanier on AI .. audio conversation: 2014-11-14 (date published at Edge.org and date of the comments: unknown)
- Discussion at Hacker News about the conversation above .. 2016-01-06
- My comment on Lanier's statements .. 2016-01-14
- Bruce Schneier: Replacing judgment with algorithms that rate everyone's trustworthiness .. 2016-01-08
- reddit AMA: the OpenAI research team .. 2016-01-09
- HN comments on the reddit AMA above .. 2016-01-10
- World Economic Forum: The State of Artificial Intelligence .. 2016-01-21
- Martin Rünz on big data and data privacy .. 2016-02-14
- Robert Epstein: How the Internet flips elections and alters our thoughts .. 2016-02-18
- Analysis of walking and talking behaviors can predict how ideas spread through a population .. 2016-06-07
- Facebook has a mysterious team working on tech that sounds like mind reading .. 2017-01-12
- The rise of the weaponized AI propaganda machine .. 2017-02
- Sam Altman: The Merge .. 2017-12
 Part I of my essay on AI is more of a wild-eyed discussion. – That's not my fault. Nick Bostrom and friends have preset the nature of the superintelligence topic.
 What is the top question for a young billionaire these days? - "Where is the battle arena for the ultimate jackpot?"
When Bill Gates dreamed his dream of global dominance 30 years ago, it was a bit laughable. Gates even missed the first years of the WWW (not to mention his 10-year deficit in operating systems). But this time –with AI– it's a grave situation. - AI (and Big Data) tends by its nature to be total. - And in the end every big result in research and technology finds its way
- to be a weapon
- or to be an instrument to control people.
 Seen from the angle of risks and safety, it is a merit of OpenAI's approach to do away with the puerile idea that AI could "escape" and assume an independent (bad) existence. (At least for this century that is not the point.) Nick Bostrom borrowed the fairy tale from 'Terminator' to be big in US pseudoscience and to distract real science from this one fact:
AI is the biggest threat not because it will escape any control but –on the contrary– because it will be in the hands of humans, of governments and "clans".
The problem remains unsolved: how to tackle the threat? - In connection with OpenAI we experience (among others) two quite confused attempts:
- A blog post at slatestarcodex can be seen as largely typical of Bostrom-trained people. Their theory ignores the immediate threat posed by the practical implementation of AI in human activities and sticks with the nebulous vision of AI as a personalized, almighty and unstoppable force that can be friendly to humanity or bad.
Accordingly, the blog post fails to cope with the contradiction between AI that gets "accidentally uncontrollable and harmful" and AI that is made by good guys and intentionally bad. ("Bad" always raises the question "Bad for whom?" - in this case, bad for the rich or bad for the people of the world.)
— It is stupid to ask the question "Will AI be friendly or bad?". It will simply not work at all if not everything is intended and planned. There are billions of ways to have bugs in a system. You may be sure: if you do not know what you are doing or what you intend, the system will not make a sound when you press the start button.
For the mentioned blog post it is unthinkable that the US-made Hiroshima bomb, the US-made Agent Orange poisoning of Vietnam, the US-trained hit squads in Chile, Afghanistan, Syria etc. were intended ( intended ! ) crimes against humanity.
US-made super AI could be just the same, right? – And there are a lot of whitewashers who try to obfuscate that by posting long, mystifying AI tirades.
- OpenAI: "We prefer this: many AIs, in the hands of almost everyone." (see statement 8)
This downplaying tactic of nice guys reminds me of A. Einstein, who sent the famous "Let's A-bomb!" letter to the US president and was a famous speaker at pacifist meetings.
 For the US-American political doctrine it is common practice to plan and unleash overthrows abroad. - But it is unthinkable to question one's own political system. Domestic US problems can be solved by technology, by technical means, or cannot be solved at all. Everything else would be connected to the (despised) European history of revolutions and bloody idealism. - So the biggest threat to humanity, too, will now be solved NOT by political/social demands but by technical means: "Super AI will be a NICE AI because it will be our (a US) AI."
But by what right would that be the solution? Altman and Musk should by default think of super AI as a Russian, German or Chinese one (if the two guys are not able to see that U.S.-run AI will be even more dangerous). That would clear things up.
Humanity has the right to deal with AI in its global political and social consequences, and not only as a question of companies and technology!
 Here is an example of the difference between (algorithmic) automation and (totalitarian) AI: today, each time I post a comment at Hacker News (a service of Sam Altman's Y Combinator), my text gets dimmed (i.e. censored / soft-blocked) 20 minutes later by the administrator or an algorithm (i.e. automation). - In the future, when real YC AI works at HN, they won't need to do that anymore, because their AI will know what I want to post 20 minutes before I know it. And each time it will send me a little piece of malware (AIware) that blocks most keys on my keyboard. The only word I will still be able to type (as often as I like) is YC YC YC.. (This example is not meant to be funny.)
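The "administrator or an algorithm" half of that example is plain automation and can be sketched in a few lines. This is a purely hypothetical rule (HN's actual moderation logic is not public; the threshold and word list are invented):

```python
# Hypothetical rule-based comment dimming ("automation", not AI):
# a static check, nothing resembling intelligence or prediction.
FLAGGED_PHRASES = {"shadowban", "soft-block"}  # invented watch list

def should_dim(comment_text: str, account_karma: int) -> bool:
    # Dim when the account is below a karma threshold, or when the
    # text contains a phrase from the watch list.
    if account_karma < 10:
        return True
    text = comment_text.lower()
    return any(phrase in text for phrase in FLAGGED_PHRASES)
```

The totalitarian step described above would replace this static after-the-fact check with a prediction of the comment before it is even written - a difference of kind, not of degree.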
 Isn't it a given at all times that what happens in the future is in most respects determined by the past? So in what sense could the openness of the future be different this time? - This time the future switches from being determined (it happens) to being planned (nothing important "happens" anymore). And people will not be able to see what is supposed to be wrong with that. They will live in a filter bubble.
 Here is an example (from cosmology / the "big crunch") of what I mean by "Already today enforced conformity works imperceptibly, like for frogs in the warm boiler".
 Addition 2016-06-20: Today, half a year after its founding, OpenAI published its technical goals. – They strive to appear grounded and civil. But this raises issues to an even larger extent (which I discuss in my Part III blog post on AI).