
JI7

(89,247 posts)
Wed Jul 26, 2017, 04:56 AM Jul 2017

Elon Musk and Mark Zuckerberg Clash Over Future of Artificial Intelligence

Two of the technology industry’s most powerful leaders are at odds when it comes to artificial intelligence.

Tesla Motors CEO Elon Musk has a pessimistic view of the risks associated with such technology. “I keep sounding the alarm bell,” Musk told the National Governors Assn. in June. “But until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal.”

Facebook Chief Mark Zuckerberg on Sunday called Musk’s dire warnings overblown and described himself as “optimistic.”

“People who are naysayers and try to drum up these doomsday scenarios — I don’t understand it,” Zuckerberg said while taking questions via a Facebook Live broadcast. “It’s really negative, and in some ways, I actually think it is pretty irresponsible.”

Musk shot back on Twitter early Tuesday, saying Zuckerberg was out of his element. “I’ve talked to Mark about this,” Musk wrote. “His understanding of the subject is limited.”

http://www.latimes.com/business/hollywood/la-fi-zuckerberg-ai-20170725-story.html


Kentonio

(4,377 posts)
1. Yep, Zuckerberg sounds like the one out of his depth.
Wed Jul 26, 2017, 05:01 AM
Jul 2017

Which isn't massively surprising, considering his fame and fortune are the result of making a website.

SWBTATTReg

(22,112 posts)
3. By the time we realize that we have an AI problem, it's going to be way too late.
Wed Jul 26, 2017, 06:25 AM
Jul 2017

I've been coding for a very long time, and it's constantly getting easier to implement new code without interrupting ongoing activities (work in the background, write the code, compile, link-edit, load, and execute the new stuff).

But the rationale for writing 'intelligent' code has been around for a long time, and it was primarily about saving money on ongoing maintenance costs. Our limits, of course, were software (the limits of the day), hardware (the amount of DASD required, etc.), and budgets/money.

By the time we realize that we have an AI problem, it's going to be way too late.

longship

(40,416 posts)
6. Some folks are living a Matrix wet dream.
Wed Jul 26, 2017, 08:10 AM
Jul 2017

That is all this AI hand wringing is about.

Don't get me started about Ray Kurzweil's utter Singularity lunacy, yet another Matrix wet dream. (You can learn about it by spending thousands to attend Kurzweil's Singularity University, just like Trump University only different.)


Kentonio

(4,377 posts)
8. No, it's about not walking with our eyes closed into something that could kill millions of people
Wed Jul 26, 2017, 09:51 AM
Jul 2017

There are basically two danger scenarios here.

1) Real AI. There has been serious talk about developing AI that improves itself rather than relying on human iteration. This is a horrible idea, because it could lead to an AI that suddenly explodes far beyond the capabilities of human thought, and once that happens we're likely to be basically fucked. We'd be the chimps to the AI Einstein, and that's optimistic. This is the scenario that people tend to brush off as sci-fi, despite it being not nearly as sci-fi as people may imagine.

But then we have scenario 2.

2) Complex Problem-Solving AI. This isn't much further than what we have today: AI routines that are given fairly basic rule sets and required to find solutions. The theory is that as AI becomes more capable, there's a strong likelihood it will be entrusted with increasingly complex systems to manage. Anyone who has ever coded knows that predicting every edge case in how a complex routine may perform is incredibly difficult; introduce unpredictable variables, and it becomes basically impossible. The danger therefore is not a thinking AI that decides it wants to kill everybody, but a fairly simple AI that, while trying to carry out its programmed tasks, causes a massive and potentially genocidal incident simply because it came up with a solution we couldn't predict.
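The "unpredicted solution" failure mode above can be shown in a few lines of code. This is a toy sketch with entirely hypothetical names (the city model, the commute numbers, the 10-minute target are all invented for illustration): an optimizer is told only to minimize average commute time, and "relocate a resident" is one of its allowed actions. Because nothing in the objective says the residents matter, a simple greedy search discovers that removing people is the most reliable way to satisfy the rule set.

```python
def average_commute(commutes):
    """The objective the machine is told to minimize: mean commute time."""
    return sum(commutes) / len(commutes) if commutes else 0.0

def optimize(commutes, target=10):
    """Greedy search: repeatedly apply whichever allowed action most reduces
    the objective. 'Relocate the worst-off commuter' was intended as a
    rarely-used action, but the search abuses it, since dropping the longest
    commute always lowers the average."""
    commutes = list(commutes)
    while commutes and average_commute(commutes) > target:
        commutes.remove(max(commutes))  # "relocate" the worst commuter
    return commutes

city = [5, 8, 30, 45, 60]            # commute times in minutes
survivors = optimize(city)
print(survivors)                      # [5, 8]
print(average_commute(survivors))     # 6.5 -- objective met by removing 3 of 5 residents
```

The objective is satisfied and every step is perfectly logical, which is exactly the point: the catastrophic outcome comes not from malice but from a constraint the designer never thought to encode.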

Even if you think that scenario 1 is a 'Matrix wet dream', the threat of scenario 2 is extremely real and highly dangerous. Personally I'm not writing off scenario 1 either, given that our technology is expanding exponentially rather than linearly, and the merging of very new and highly powerful technologies makes it impossible to predict even 10 years out with any accuracy.

Anyway, it's better to plan for an outcome you don't need than to face a potentially species-ending emergency you have no preparation or defence for.

longship

(40,416 posts)
11. Please define "real AI".
Wed Jul 26, 2017, 10:06 AM
Jul 2017

And along the way, one might explain how digital technology can somehow model human neurology with any fidelity. I suspect that it is all Matrix wet dream rubbish.

I would gladly admit my mistake when the Matrix (or Skynet) becomes reality. Not holding my breath. Hollywood is not reality, no matter how entertaining.

I'll be back!



Kentonio

(4,377 posts)
13. What is real AI? That's actually a staggeringly difficult question
Wed Jul 26, 2017, 01:19 PM
Jul 2017

It's generally assumed to be an artificially created intelligence capable of sentient thought. The problem, though, is that we don't really know what sentient thought is. The simple definition is 'capable of feeling', but what does that even mean? Does it even matter? We look at everything from such a human-centric viewpoint that we can't even clearly define whether most animals are really sentient.

The famous Turing Test was devised to answer this question of whether a machine could be intelligent, and because of the sheer difficulty of defining thought, Turing fell back on basically 'can it imitate a human so well you couldn't tell the difference?'

When I saw you'd asked about modeling human neurology, my first thought was simply 'why would we?'. We've evolved in ways that are almost certainly hugely inefficient because of the way evolution works. We don't know what makes us capable of free thought (if such a thing even exists), we don't know what feelings are besides the links we can draw to our evolutionary past, and all we really know is that our brains are a highly complex electro-biological computer. As machine computers get more and more advanced (and don't forget we're now researching biological and quantum forms of computing), it seems inevitable that in the very near future we'll surpass the capabilities of our own brains. Then the only question becomes what software we load it with.

TLDR: we're probably not as special as we think we are.

longship

(40,416 posts)
14. AI has nothing to do with sentient thought.
Wed Jul 26, 2017, 03:48 PM
Jul 2017

That is nothing but a SciFi wet dream. As is worrying about some dystopic robot nightmare future.

Hell, neuroscience cannot even say what consciousness is! How is one going to model what nobody understands? The best we can say is that it seems to be an emergent property of the brain.

longship

(40,416 posts)
16. That, however, is not the standard narrative.
Thu Jul 27, 2017, 04:12 AM
Jul 2017

I prefer to ignore fictional narratives on these discussions.

When one ignores sentience, I don't know of any threats. And I DO ignore machine sentience because it is utter Kurzweilian rubbish.



Saboburns

(2,807 posts)
7. Isn't it strange how we humans project our own weaknesses onto machines?
Wed Jul 26, 2017, 09:03 AM
Jul 2017

Regarding 'taking over the Earth' and 'wiping out a species via genocide':

These are not machine traits, nor, as far as I am aware, traits of any other Earth species.

No, taking over large areas and wiping out other species, and indeed different races of our own species, are traits of Homo sapiens.

I guess it could be argued that since humans program the machines, we could somehow infect them with our own faults, which could lead to disaster. But hey, that wouldn't be the machines' fault now, would it?

What an ironic case that would make, eh?


Kentonio

(4,377 posts)
9. Machines also wouldn't have human weaknesses like compassion unless we programmed them in
Wed Jul 26, 2017, 09:55 AM
Jul 2017

What value would a human life have to a machine trying to solve a problem? Would 'a million people live there' be a significant factor to a machine tasked with making a city function more efficiently? Would a machine feel any need to relocate people, if killing them was materially cheaper?

Humans can be assholes, but it's worth remembering that we have a lot of rather nice traits too that aren't present in much of nature or logic.

Johonny

(20,833 posts)
10. The main problem with AI and robots is...massive unemployment
Wed Jul 26, 2017, 10:04 AM
Jul 2017

No offense to Musk, but killing people isn't really the giant societal change coming to a future near you.

EllieBC

(3,013 posts)
12. THIS!!!!
Wed Jul 26, 2017, 10:26 AM
Jul 2017

Last edited Wed Jul 26, 2017, 01:12 PM - Edit history (1)

And anyone foolish enough to think the population that can still find jobs will pay for the part of the population that can't is living in a dream. Those who still have employment will slowly turn against those who don't as their numbers grow; they'll do things like deny them the right to vote, and then they'll vote in people who will slash any help to them.

All that will happen is starvation and death for those replaced by bots. With driverless vehicles, that could be a lot of people (the trucking industry, plus the service industry). It's an ugly, ugly future if we don't deal with it now.
