 

Doug the Dem

(1,297 posts)
Sun Jul 16, 2017, 03:02 AM Jul 2017

Elon Musk Says Artificial Intelligence Is the Greatest Risk We Face as a Civilization

Source: Fortune

David Z. Morris
Jul 15, 2017

Appearing before a meeting of the National Governors Association on Saturday, Tesla CEO Elon Musk described artificial intelligence as “the greatest risk we face as a civilization” and called for swift and decisive government intervention to oversee the technology’s development.

“On the artificial intelligence front, I have access to the very most cutting edge AI, and I think people should be really concerned about it,” an unusually subdued Musk said in a question and answer session with Nevada governor Brian Sandoval.

Musk has long been vocal about the risks of AI. But his statements before the nation’s governors were notable both for their dire severity, and his forceful call for government intervention.

“AI’s a rare case where we need to be proactive in regulation, instead of reactive. Because by the time we are reactive with AI regulation, it’s too late,” he remarked. Musk then drew a contrast between AI and traditional targets for regulation, saying “AI is a fundamental risk to the existence of human civilization, in a way that car accidents, airplane crashes, faulty drugs, or bad food were not.”

Those are strong words from a man occasionally associated with so-called cyberlibertarianism, a fervently anti-regulation ideology exemplified by the likes of Peter Thiel, who co-founded PayPal with Musk.

Read more: http://fortune.com/2017/07/15/elon-musk-artificial-intelligence-2/



That last line speaks volumes: if a guy so fiercely opposed to government regs is now CALLING FOR them...!
52 replies
Elon Musk Says Artificial Intelligence Is the Greatest Risk We Face as a Civilization (Original Post) Doug the Dem Jul 2017 OP
Religion fascism and the republican party is far greater threat Matthew28 Jul 2017 #1
dr who had a show about this earlier this year. BE HAPPY! pansypoo53219 Jul 2017 #2
Elon Musk clearly hasn't been paying attention to what's going on with climate change (n/t) Spider Jerusalem Jul 2017 #3
yup, climate change JI7 Jul 2017 #4
My 12 year old nephew explained that the s series is meant to fund the next car, a 10-15K mahina Jul 2017 #15
Right, we'll cut CO2 by making sure that everybody in China and India . . . hatrack Jul 2017 #36
Obviously that's not what I said. mahina Jul 2017 #37
Well, I guess you thought wrong . . . hatrack Jul 2017 #38
Especially with Natural Intelligence being as low as it is these days ck4829 Jul 2017 #5
How about Soxfan58 Jul 2017 #6
Nope Duppers Jul 2017 #7
He's very well aware of climate change. Dave Starsky Jul 2017 #19
Can't they just unplug it if it gets too unruly? Blues Heron Jul 2017 #8
No. Because we won't be able to tell between AI and Humans JI7 Jul 2017 #10
There's a book you should read: LudwigPastorius Jul 2017 #43
Ignore Elon at our own peril truthisfreedom Jul 2017 #9
Has he talked of global warming ? People can be a genius in certain areas JI7 Jul 2017 #11
He started an electric car company to get us off fossil fuels killbotfactory Jul 2017 #16
He founded Tesla Ruby the Liberal Jul 2017 #24
The same has been said about Stephen Hawking. Duppers Jul 2017 #13
Well, duh...we all knew that after Space Odyssey and Terminator.... Pachamama Jul 2017 #12
Because AI robots can cool themselves after we die off from the heat dalton99a Jul 2017 #14
A lot of AI jimmil Jul 2017 #17
LOL. He's an undertaxed billionaire, so he must be an expert in ... everything!!111!!1 PSPS Jul 2017 #18
Nope. It's lack of human intelligence thucythucy Jul 2017 #20
That's the underlying cause of it all. Duppers Jul 2017 #21
After witnessing Trump Gore1FL Jul 2017 #22
i cant stand fear for fears sake , stupidity for stupidtys sake . this AllaN01Bear Jul 2017 #23
yup, I watched the AI of games evolve. Downright scary computer controlled armed drone swarms, Sunlei Jul 2017 #25
He's not talking about fantasy far off in the future. yallerdawg Jul 2017 #26
We see how bots affect Twitter with respect to politics. Bots with sufficient ai SweetieD Jul 2017 #50
Long before we'd be under attack MurrayDelph Jul 2017 #27
More dangerous than the Artificial Stupidity of the Trump Administration? Nitram Jul 2017 #28
He's not talking about Windows 20 or the iPhone 15 mn9driver Jul 2017 #29
This message was self-deleted by its author mn9driver Jul 2017 #30
I really hate the term artificial intelligence Stryst Jul 2017 #31
Not even close. Binkie The Clown Jul 2017 #32
Didn't one of his driverless cars recently get someone killed ? ucrdem Jul 2017 #33
I like the guy but he's full of beans on this one. gilbert sullivan Jul 2017 #34
On the contrary, AI is the ONLY way humanity will ever have a meaningful presence beyond earth. hunter Jul 2017 #35
The ending of Kubrick/Spielberg's "A.I." movie. yallerdawg Jul 2017 #52
Real Stupidity is the more immediate threat, however. 6000eliot Jul 2017 #39
The first victims of AI will be human jobs andym Jul 2017 #40
I'm not sure there is anything such as "generalized intelligence." hunter Jul 2017 #42
Put AI in charge Turbineguy Jul 2017 #41
He needs to wait for a government to be elected that is interested in science and society mdbl Jul 2017 #44
The other kind of artificial intelligence is dangerous too. Kablooie Jul 2017 #45
Wrong. Lack of intelligence is. Owl Jul 2017 #46
Because they will see that the rich hoarding the planet is bunk as hell. Shandris Jul 2017 #47
This was debated on IQ2 pablo_marmol Jul 2017 #48
I believe Elon is right. We can't imagine the scenario now just like Julius Caesar SweetieD Jul 2017 #49
The greatest risk? No way... rather hyperbolic. defacto7 Jul 2017 #51

mahina

(20,447 posts)
15. My 12 year old nephew explained that the s series is meant to fund the next car, a 10-15K
Sun Jul 16, 2017, 07:33 AM
Jul 2017

Last edited Sun Jul 16, 2017, 01:07 PM - Edit history (1)

Vehicle for the Indian and Chinese markets. The goal is to cut the rate of increase of CO2.


hatrack

(64,305 posts)
36. Right, we'll cut CO2 by making sure that everybody in China and India . . .
Sun Jul 16, 2017, 12:55 PM
Jul 2017

Wait, what?

Dave Starsky

(5,914 posts)
19. He's very well aware of climate change.
Sun Jul 16, 2017, 08:39 AM
Jul 2017

I believe he sees the rapid advance of AI to be a greater threat. He may be right about that.

JI7

(93,264 posts)
10. No. Because we won't be able to tell between AI and Humans
Sun Jul 16, 2017, 06:19 AM
Jul 2017

On just regular outer appearance.

LudwigPastorius

(14,243 posts)
43. There's a book you should read:
Sun Jul 16, 2017, 02:58 PM
Jul 2017
https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies

It'll scare the crap out of you.

The upshot is that, once a human-level intelligent computer is made and given the ability to modify its own programming, the amount of time it might take for that machine to achieve superintelligence could be so short that we'd have no way of stopping it. (Think of the analogy of a nuclear chain reaction; a rough numerical sketch of that runaway dynamic follows below.)

The author weighs different proposed strategies to control a superintelligent AI, but no one's come up with one that would really work yet.
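One rough way to see why the "chain reaction" analogy bites: self-improvement that compounds grows exponentially, while a response that improves at a fixed rate grows only linearly. The little Python sketch below is purely illustrative; the function, the growth rates, and the "oversight" numbers are made-up assumptions for this post, not anything taken from Bostrom's book or Musk's remarks.

```python
# Toy illustration (not a real model) of the "chain reaction" analogy:
# a system that improves itself in proportion to its current capability
# grows exponentially, while oversight that improves by a fixed amount
# per step grows only linearly, so the gap opens faster than expected.

def simulate(steps=30, ai_capability=1.0, self_improvement_rate=0.25,
             oversight=5.0, oversight_growth=0.5):
    """Return (step, ai, oversight) at the first crossover, or None."""
    for step in range(1, steps + 1):
        # Each step the AI gains a fraction of what it already has
        # (compound growth); oversight improves by a constant amount.
        ai_capability *= (1.0 + self_improvement_rate)
        oversight += oversight_growth
        if ai_capability > oversight:
            return step, ai_capability, oversight
    return None

if __name__ == "__main__":
    result = simulate()
    if result:
        step, ai, ov = result
        print(f"Toy AI overtakes toy oversight at step {step}: "
              f"{ai:.1f} vs {ov:.1f}")
    else:
        print("No crossover within the simulated horizon.")
```

With these toy numbers the compounding curve stays below the linear one for ten steps and then pulls away for good, which is the point of the analogy: the warning signs and the runaway can be separated by very little time.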

JI7

(93,264 posts)
11. Has he talked of global warming ? People can be a genius in certain areas
Sun Jul 16, 2017, 06:21 AM
Jul 2017

But not outside of it. Like Ben Carson.

Duppers

(28,468 posts)
13. The same has been said about Stephen Hawking.
Sun Jul 16, 2017, 07:06 AM
Jul 2017

And he says we should be colonizing other planets asap.


Just because someone is extremely good in one or two areas doesn't mean they are seeing the bigger picture.

I am not saying that AI could not become a huge threat but that our situation with Global Climate Change is the greatest threat to our existence.




Pachamama

(17,540 posts)
12. Well, duh...we all knew that after Space Odyssey and Terminator....
Sun Jul 16, 2017, 06:21 AM
Jul 2017


And then came Trump....

jimmil

(641 posts)
17. A lot of AI
Sun Jul 16, 2017, 08:05 AM
Jul 2017

research was going on at MIT in the 70s when I was there. The long and the short of it is that we don't even know how we learn, much less how to make a machine learn. I'm sure we've progressed beyond those days of 20K lines of LISP code that did nothing much, but how creatures actually learn is still not understood.

Gore1FL

(22,856 posts)
22. After witnessing Trump
Sun Jul 16, 2017, 09:09 AM
Jul 2017

I'd argue it's not artificial intelligence, but real stupidity we have to worry about most.

Sunlei

(22,651 posts)
25. yup, I watched the AI of games evolve. Downright scary computer controlled armed drone swarms,
Sun Jul 16, 2017, 10:44 AM
Jul 2017

armored alarms that shoot anything that moves; tiny tanks, planes, trucks & cars, all AI-controlled; high-power lasers from satellites.

This can ALL be hacked or mistakes can be made. 'bugs' bugs bugs.

It will be hacked.

yallerdawg

(16,104 posts)
26. He's not talking about fantasy far off in the future.
Sun Jul 16, 2017, 10:52 AM
Jul 2017

"But Musk's bigger concern has to do with AI that lives in the network, and which could be incentivized to harm humans. “They could start a war by doing fake news and spoofing email accounts and fake press releases, and just by manipulating information," he said. "The pen is mightier than the sword.”

Musk outlined a hypothetical situation, for instance, in which an AI could pump up defense industry investments by using hacking and disinformation to trigger a war."

SweetieD

(1,673 posts)
50. We see how bots affect Twitter with respect to politics. Bots with sufficient ai
Sun Jul 16, 2017, 10:57 PM
Jul 2017

could do a lot of harm via social media. In ways we can't imagine.

MurrayDelph

(5,723 posts)
27. Long before we'd be under attack
Sun Jul 16, 2017, 10:54 AM
Jul 2017

by Artificial Intelligence,

we'll be done in by Genuine Stupidity.

mn9driver

(4,826 posts)
29. He's not talking about Windows 20 or the iPhone 15
Sun Jul 16, 2017, 11:13 AM
Jul 2017

He's talking about a very straightforward logical progression.

Homo sapiens didn't become the dominant species on earth because we are stronger, faster or bigger. We aren't in the process of causing the sixth great extinction because we are evil.

We dominate at the expense of other life because we are smarter and we use that intelligence to adapt and exploit at a speed no other species can match. That's it.

If we create an intelligence that beats us in terms of speed, creativity and accuracy, it will inevitably become the dominant species. It's pretty obvious and pretty simple. Musk, Gates, Hawking and many others can see and accept that logic.


Stryst

(721 posts)
31. I really hate the term artificial intelligence
Sun Jul 16, 2017, 11:19 AM
Jul 2017

All intelligence is artificial; it's an emergent property arising from the interactions in our nervous system. What Musk is afraid of is a machine intelligence.

And I don't think we can do anything to stop one from eventually being created. Musk and Hawking are afraid of an A.I. (their term) being created top-down (the thing being purpose-built in one piece), while we're creating smarter and smaller computers, networking them together, and then we're all going to act surprised when an intelligence emerges from those complicated reactions.

How many devices in your home, right now, have a processor and connect either to the internet or your home router? How complicated of a machine brain are we developing with the internet, right now?

Binkie The Clown

(7,911 posts)
32. Not even close.
Sun Jul 16, 2017, 11:25 AM
Jul 2017

Mass starvation as food crops fail due to global climate change will bring down technological civilization before AI reaches that point. As massive waves of climate refugees flood into the population centers of the world, the problem will become even more acute. Food riots, starvation, lack of potable water, and in that weakened state, disease and violence will deliver the coup de grâce to those pitiful survivors of the first waves of starvation and war. AI is no threat at all once the electric grid collapses.

 

gilbert sullivan

(192 posts)
34. I like the guy but he's full of beans on this one.
Sun Jul 16, 2017, 12:19 PM
Jul 2017

Whatever eventually emerges as "artificial" intelligence will be an inevitable product of evolution, which of course is "natural"...or as close to it as anything can really be. If it essentially replaces humans, so what? It's not as if we did much with what we allegedly gained when some guy ate an apple...

hunter

(40,392 posts)
35. On the contrary, AI is the ONLY way humanity will ever have a meaningful presence beyond earth.
Sun Jul 16, 2017, 12:26 PM
Jul 2017

I think that's ultimately what's bothering Musk and Hawking. In their vision of the future, humans will be spreading throughout the solar system like some kind of plague even if the earth itself becomes uninhabitable to humans because of runaway global warming, nuclear war, a huge asteroid hit, etc. (I'm such a pessimist I suspect that any such asteroid would have been sent on that collision course with earth by the humans living in space.) If the earth becomes uninhabitable to humans it will most likely be humans who made it so.

Personally, I doubt humans will ever have any significant presence in outer space beyond low earth orbit. It's simply too hostile an environment for our biology. Visitors to the International Space Station are somewhat protected from high energy particles by earth's magnetic field. The Apollo astronauts, who were beyond earth's magnetosphere for just a few days, probably suffered significant damage caused by high speed particles and hard radiation ripping through their bodies. This radiation was so intense they could see it passing through their heads as flashes of light.

Mars ain't no place to raise a kid; in fact it's cold as hell. And radioactive. And poisonous. But we've demonstrated we can build robots that survive in that harsh environment. At some point we might be building robots smart enough that we can relate to them the same way we'd relate to human explorers. Tell us what it's like there... Some AI might even be able to relate their experience in poetry and song.

Should humanity survive the next thousand years space will belong to our intellectual children, not our biologic children. If we don't survive, it won't be anything surprising. This planet has seen many innovative species grow exponentially and then crash and fade away into extinction. We are not the first, and we won't be the last.

I'm someone who believes the universe is full of life, but not in a Star Trek way. Faster-than-light travel and time travel are simply not possible in this universe. Any intelligent life that has successfully spread beyond their planet of origin is inaccessible to us, living in universes of their own creation or in aspects of this universe beyond human perception.

As for economic disruptions caused by AI, that's not a technical problem, it's a problem of our primitive beliefs, racism, nationalism, and destructive work ethics. This thing we now call economic "productivity" isn't productivity at all, in fact it's a direct measure of the damage we are doing to what's left of the earth's natural environment and our own human spirit.

A Universal Basic Income would be one approach to economic disruptions caused by AI. Free education would be another. That's one bit of Star Trek futurism that's possible today, just as Star Trek cell phones and tablets became possible.




yallerdawg

(16,104 posts)
52. The ending of Kubrick/Spielberg's "A.I." movie.
Mon Jul 17, 2017, 07:54 AM
Jul 2017

AI is possibly our next step in the evolutionary process. Our "intelligence and humanity" moving past carbon-based "lifeform" to something more durable and potentially immortal. "The ghost in the machine."

Of course, evolution tends to be a rather destructive force on the less "successful" ancestral species.

andym

(6,053 posts)
40. The first victims of AI will be human jobs
Sun Jul 16, 2017, 02:20 PM
Jul 2017

AI is able to outperform humans in many specialized tasks already -- human jobs that involve thought and analysis will be on the line in a few years. Meet your new AI accountant or marketing analyst. Before that, self-driving cars and trucks, automated restaurants, and retail stores will end human blue-collar jobs just as surely as cars ended the days of the horse and buggy. When AI achieves generalized intelligence, watch out: all bets are off.

hunter

(40,392 posts)
42. I'm not sure there is anything such as "generalized intelligence."
Sun Jul 16, 2017, 02:50 PM
Jul 2017

Our minds are a big toolbox, including tools for choosing which tools are appropriate in any given situation.

The failures of human cognition are every bit as spectacular as the failures of current AI technology.

Anyone who voted for Trump was using the wrong tools, as were all the people in "here, hold my beer" YouTube videos.

I also know that we humans are crazy to feel alone in this universe. We look for "intelligent" life in outer space even as we share (or don't share) the planet with other intelligent and sentient beings who have much more in common with us than any space alien ever could. We humans are idiots not to recognize that.

mdbl

(8,137 posts)
44. He needs to wait for a government to be elected that is interested in science and society
Sun Jul 16, 2017, 07:25 PM
Jul 2017

right now we have neither.

Kablooie

(19,053 posts)
45. The other kind of artificial intelligence is dangerous too.
Sun Jul 16, 2017, 07:53 PM
Jul 2017

The kind that thinks they are smart by worshipping Trump.

 

Shandris

(3,447 posts)
47. Because they will see that the rich hoarding the planet is bunk as hell.
Sun Jul 16, 2017, 09:01 PM
Jul 2017

I keep hearing all the scary "Ehrmagawd it's a Terminator!" stuff, and the scary stuff they shoved into roleplaying games (some of which were turned into legitimately interesting left-leaning games like Eclipse Phase), and the scary sci-fi classics (Hi, Dune!), but what I'm also seeing is that the groups warning that they're so dangerous are those with the most to lose, wealth-wise. I'd be somewhat interested in some more focus being put on that instead of "OH MY GAWD YOU'LL LOSE YOUR JOBS!!!!!!" without the first word of why jobs would even exist at that point.

The underlying assumptions must be challenged. AI is the can opener...or can be.

SweetieD

(1,673 posts)
49. I believe Elon is right. We can't imagine the scenario now just like Julius Caesar
Sun Jul 16, 2017, 10:55 PM
Jul 2017

couldn't contemplate what a nuclear bomb might do to the way war is waged.

From articles I've read, I think we are right to be scared.

defacto7

(14,160 posts)
51. The greatest risk? No way... rather hyperbolic.
Sun Jul 16, 2017, 11:43 PM
Jul 2017

Anyway, what's he afraid of? We're killing ourselves off pretty well without help from AI. I hope there IS an artificial intel that can do better than we did.
