Elon Musk Says Artificial Intelligence Is the Greatest Risk We Face as a Civilization
Source: Fortune
David Z. Morris
Jul 15, 2017
Appearing before a meeting of the National Governors Association on Saturday, Tesla CEO Elon Musk described artificial intelligence as "the greatest risk we face as a civilization" and called for swift and decisive government intervention to oversee the technology's development.
"On the artificial intelligence front, I have access to the very most cutting edge AI, and I think people should be really concerned about it," an unusually subdued Musk said in a question-and-answer session with Nevada governor Brian Sandoval.
Musk has long been vocal about the risks of AI. But his statements before the nation's governors were notable both for their dire severity and for his forceful call for government intervention.
"AI's a rare case where we need to be proactive in regulation, instead of reactive. Because by the time we are reactive with AI regulation, it's too late," he remarked. Musk then drew a contrast between AI and traditional targets for regulation, saying AI is "a fundamental risk to the existence of human civilization" in a way that car accidents, airplane crashes, faulty drugs, or bad food were not.
Those are strong words from a man occasionally associated with so-called cyberlibertarianism, a fervently anti-regulation ideology exemplified by the likes of Peter Thiel, who co-founded PayPal with Musk.
Read more: http://fortune.com/2017/07/15/elon-musk-artificial-intelligence-2/
That last line speaks volumes: if a guy so fiercely opposed to government regs is now CALLING FOR them...!
Matthew28
(1,857 posts)
pansypoo53219
(22,908 posts)
Spider Jerusalem
(21,786 posts)
JI7
(93,264 posts)
mahina
(20,447 posts)Last edited Sun Jul 16, 2017, 01:07 PM - Edit history (1)
Vehicle for the Indian and Chinese markets. The goal is to cut the rate of increase of CO2.
hatrack
(64,305 posts)Wait, what?
mahina
(20,447 posts)I thought I blocked you years ago.
hatrack
(64,305 posts)Nice edit, btw. Bless your heart!
ck4829
(37,433 posts)
Soxfan58
(3,534 posts)Lack of intelligence in the Oval Office.
Duppers
(28,468 posts)He has not read nearly enough about *Global Warming*.
Dave Starsky
(5,914 posts)I believe he sees the rapid advance of AI to be a greater threat. He may be right about that.
Blues Heron
(8,430 posts)
JI7
(93,264 posts)On just regular outer appearance.
LudwigPastorius
(14,243 posts)It'll scare the crap out of you.
The upshot is that, once a human-level intelligent computer is made and given the ability to modify its own programming, the amount of time it might take for that machine to achieve superintelligence could be so short that we'd have no way of stopping it. (Think of the analogy of a nuclear chain reaction.)
The author weighs different proposed strategies to control a superintelligent AI, but no one's come up with one that would really work yet.
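The chain-reaction analogy comes down to compounding: if each round of self-improvement multiplies capability by some factor, the gap between "human-level" and "far beyond" can close in very few rounds. Here's a minimal sketch in Python; every number in it is an illustrative assumption, not an estimate from the book:

```python
# Toy model of a compounding "takeoff": each improvement cycle
# multiplies capability by a fixed factor, like a chain reaction.
# The gain factors and threshold below are illustrative assumptions.

def cycles_to_threshold(capability, gain_per_cycle, threshold):
    """Count cycles until capability reaches threshold, assuming
    each cycle multiplies capability by gain_per_cycle."""
    cycles = 0
    while capability < threshold:
        capability *= gain_per_cycle
        cycles += 1
    return cycles

# A modest 10% gain per cycle crosses a 1000x bar in 73 cycles;
# doubling each cycle does it in 10.
print(cycles_to_threshold(1.0, 1.10, 1000))
print(cycles_to_threshold(1.0, 2.00, 1000))
```

The point of the toy model is only that multiplicative growth shrinks the "time to react" window very quickly; it says nothing about whether real systems would actually improve that way.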
truthisfreedom
(23,520 posts)The man is a genius.
JI7
(93,264 posts)But not outside of it. Like Ben Carson.
killbotfactory
(13,566 posts)
Ruby the Liberal
(26,612 posts)Musk is at the forefront of getting us off of fossil fuels.
Duppers
(28,468 posts)And he says we should be colonizing other planets asap.
Just because someone is extremely good in one or two areas doesn't mean they are seeing the bigger picture.
I am not saying that AI could not become a huge threat but that our situation with Global Climate Change is the greatest threat to our existence.
Pachamama
(17,540 posts)And then came Trump....
dalton99a
(92,323 posts)
jimmil
(641 posts)research was going on at MIT in the 70s when I was there. The long and the short of it is we don't even know how we learn, much less how to make a machine learn. I am sure things have progressed since those days of 20K lines of LISP code that did nothing, really, but how creatures actually learn is still not understood.
PSPS
(15,221 posts)
thucythucy
(9,043 posts)that will do us in.
Duppers
(28,468 posts)
Gore1FL
(22,856 posts)I'd argue it's not artificial intelligence, but real stupidity we have to worry about most.
AllaN01Bear
(28,653 posts)
Sunlei
(22,651 posts)Armored alarms that shoot anything that moves; tiny tanks; AI-controlled planes, trucks & cars; high-power lasers from satellites.
This can ALL be hacked, or mistakes can be made. 'Bugs,' bugs, bugs.
It will be hacked.
yallerdawg
(16,104 posts)"But Musk's bigger concern has to do with AI that lives in the network, and which could be incentivized to harm humans. 'They could start a war by doing fake news and spoofing email accounts and fake press releases, and just by manipulating information,' he said. 'The pen is mightier than the sword.'
Musk outlined a hypothetical situation, for instance, in which an AI could pump up defense industry investments by using hacking and disinformation to trigger a war."
SweetieD
(1,673 posts)could do a lot of harm via social media. In ways we can't imagine.
MurrayDelph
(5,723 posts)by Artificial Intelligence,
we'll be done in by Genuine Stupidity.
Nitram
(27,123 posts)
mn9driver
(4,826 posts)He's talking about a very straightforward logical progression.
Homo sapiens didn't become the dominant species on earth because we are stronger, faster, or bigger. We aren't in the process of causing the sixth great extinction because we are evil.
We dominate at the expense of other life because we are smarter and we use that intelligence to adapt and exploit at a speed no other species can match. That's it.
If we create an intelligence that beats us in terms of speed, creativity and accuracy, it will inevitably become the dominant species. It's pretty obvious and pretty simple. Musk, Gates, Hawking and many others can see and accept that logic.
Response to Doug the Dem (Original post)
mn9driver This message was self-deleted by its author.
Stryst
(721 posts)All intelligence is artificial; it's an emergent property arising from the interactions in our nervous system. What Musk is afraid of is a machine intelligence.
And I don't think we can do anything to stop one from eventually being created. Musk and Hawking are afraid of an A.I. (their term) being created top-down (the thing being purpose-built in one piece), while we're creating smarter and smaller computers, networking them together, and then we're all going to act surprised when an intelligence emerges from those complicated reactions.
How many devices in your home, right now, have a processor and connect either to the internet or your home router? How complicated of a machine brain are we developing with the internet, right now?
Binkie The Clown
(7,911 posts)Mass starvation as food crops fail due to global climate change will bring down technological civilization before AI reaches that point. As massive waves of climate refugees flood into the population centers of the world, the problem will become even more acute. Food riots, starvation, lack of potable water, and in that weakened state, disease and violence will deliver the coup de grâce to those pitiful survivors of the first waves of starvation and war. AI is no threat at all once the electric grid collapses.
ucrdem
(15,720 posts)In which case he ought to know.
https://www.nytimes.com/2016/09/15/business/fatal-tesla-crash-in-china-involved-autopilot-government-tv-says.html?mcubz=0
gilbert sullivan
(192 posts)Whatever eventually emerges as "artificial" intelligence will be an inevitable product of evolution, which of course is "natural"... or as close to it as anything can really be. If it essentially replaces humans, so what? It's not as if we did much with what we allegedly gained when some guy ate an apple...
hunter
(40,392 posts)I think that's ultimately what's bothering Musk and Hawking. In their vision of the future humans will be spreading throughout the solar system like some kind of plague even if the earth itself becomes uninhabitable to humans because of runaway global warming, nuclear war, a huge asteroid strike, etc. (I'm such a pessimist I suspect that any such asteroid would have been sent on that collision course with earth by the humans living in space.) If the earth becomes uninhabitable to humans it will most likely be humans who made it so.
Personally, I doubt humans will ever have any significant presence in outer space beyond low earth orbit. It's simply too hostile an environment for our biology. Visitors to the International Space Station are somewhat protected from high energy particles by earth's magnetic field. The Apollo astronauts, who were beyond earth's magnetosphere for just a few days, probably suffered significant damage caused by high speed particles and hard radiation ripping through their bodies. This radiation was so intense they could see it passing through their heads as flashes of light.
Mars ain't no place to raise a kid; in fact it's cold as hell. And radioactive. And poisonous. But we've demonstrated we can build robots that survive in that harsh environment. At some point we might be building robots smart enough that we can relate to them the same way we'd relate to human explorers. Tell us what it's like there... Some AI might even be able to relate their experience in poetry and song.
Should humanity survive the next thousand years space will belong to our intellectual children, not our biologic children. If we don't survive, it won't be anything surprising. This planet has seen many innovative species grow exponentially and then crash and fade away into extinction. We are not the first, and we won't be the last.
I'm someone who believes the universe is full of life, but not in a Star Trek way. Faster-than-light travel and time travel are simply not possible in this universe. Any intelligent life that has successfully spread beyond their planet of origin is inaccessible to us, living in universes of their own creation or in aspects of this universe beyond human perception.
As for economic disruptions caused by AI, that's not a technical problem, it's a problem of our primitive beliefs, racism, nationalism, and destructive work ethics. This thing we now call economic "productivity" isn't productivity at all, in fact it's a direct measure of the damage we are doing to what's left of the earth's natural environment and our own human spirit.
A Universal Basic Income would be one approach to economic disruptions caused by AI. Free education would be another. That's one bit of Star Trek futurism that's possible today, just as Star Trek cell phones and tablets became possible.
yallerdawg
(16,104 posts)AI is possibly our next step in the evolutionary process. Our "intelligence and humanity" moving past carbon-based "lifeform" to something more durable and potentially immortal. "The ghost in the machine."
Of course, evolution tends to be a rather destructive force on the less "successful" ancestral species.
6000eliot
(5,643 posts)
andym
(6,053 posts)AI is able to outperform humans in many specialized tasks already -- human jobs that involve thought and analysis will be on the line in a few years. Meet your new AI accountant, or marketing analyst. Before that, self-driving cars, trucks, automated restaurants and retail stores will end human blue-collar jobs just as surely as cars ended the days of the horse and buggy. When AI achieves generalized intelligence, watch out: all bets are off.
hunter
(40,392 posts)Our minds are a big toolbox, including tools for choosing which tools are appropriate in any given situation.
The failures of human cognition are every bit as spectacular as the failures of current AI technology.
Anyone who voted for Trump was using the wrong tools, as were all the people in "here, hold my beer" YouTube videos.
I also know that we humans are crazy to feel alone in this universe. We look for "intelligent" life in outer space even as we share (or don't share) the planet with other intelligent and sentient beings who have much more in common with us than any space alien ever could. We humans are idiots not to recognize that.
Turbineguy
(39,860 posts)of the GOP.
mdbl
(8,137 posts)right now we have neither.
Kablooie
(19,053 posts)The kind that thinks they are smart by worshipping Trump.
Owl
(3,763 posts)
Shandris
(3,447 posts)I keep hearing all the scary "Ehrmagawd it's a Terminator!" stuff, and the scary stuff they shoved into roleplaying games (some of which were turned into legitimately interesting left-leaning games like Eclipse Phase), and the scary sci-fi classics (Hi, Dune!), but what I'm also seeing is that the groups warning that they're so dangerous are those with the most to lose, wealth-wise. I'd be somewhat interested in some more focus being put on that instead of "OH MY GAWD YOU'LL LOSE YOUR JOBS!!!!!!" without the first word of why jobs would even exist at that point.
The underlying assumptions must be challenged. AI is the can opener... or can be.
pablo_marmol
(2,375 posts)I won't comment on which side won the debate.
http://www.intelligencesquaredus.org/debates/artificial-intelligence-risks-could-outweigh-rewards
SweetieD
(1,673 posts)couldn't contemplate what a nuclear bomb might do to the way war is waged.
From articles I've read, I think we are right to be scared.
defacto7
(14,160 posts)Anyway, what's he afraid of? We're killing ourselves off pretty well without help from AI. I hope there IS an artificial intel that can do better than we did.