The discussion on the safety of artificial intelligence

OpenAI and AI as terms of political deception (Part II) [1]

Contents

Abstract / bottom line:
The founding of the new, heavily funded non-profit research company OpenAI is (half a year after my first analysis) a further opportunity to voice my criticism of a questionable approach to the AI topic. [2]
I consider this valid beyond all question:
The world demands the right to govern and restrict AI in its global political and social consequences! (Humanity demands that AI not be left to companies, national interests, and technology alone.)

In his Medium article, Steven Levy outlines how OpenAI was formed; he interviewed Elon Musk and Sam Altman on the topic.
I use the answers from the interview (repeated here in shortened and paraphrased form) as anchors for my comments. (The numbering does not follow the order of the interview.)

OpenAI statement 1:  We don't want one single entity that is a million times more powerful than any human.

My comment:

xxxx

OpenAI statement 2:  Oversight of AI development? – We are very conscious of safety and of possible bad AI. If we see something that we think is potentially a safety risk, we will want to make it public. [3]

My comment:

xxxx

xxxxx

My verdict on OpenAI (more fairly: on what was said in the interview):
At almost no point did Altman and Musk tell us the truth. – It is typical US-American egocentricity: they simply do what is best for themselves and expect the whole world to believe it is a gift to the world.
But given that "2045" will probably mean doom for the world, it is not.

Short URL: 

[1]  The background image uses an illustration by Peter-Michael Carruthers

[2]  xxxx