General Discussion
Elon Musk and Mark Zuckerberg Clash Over Future of Artificial Intelligence
Two of the technology industry's most powerful leaders are at odds when it comes to artificial intelligence.
Tesla Motors CEO Elon Musk has a pessimistic view of the risks associated with such technology. "I keep sounding the alarm bell," Musk told the National Governors Assn. in June. "But until people see robots going down the street killing people, they don't know how to react because it seems so ethereal."
Facebook Chief Mark Zuckerberg on Sunday called Musk's dire warnings overblown and described himself as optimistic.
"People who are naysayers and try to drum up these doomsday scenarios, I don't understand it," Zuckerberg said while taking questions via a Facebook Live broadcast. "It's really negative, and in some ways, I actually think it is pretty irresponsible."
Musk shot back on Twitter early Tuesday, saying Zuckerberg was out of his element. "I've talked to Mark about this," Musk wrote. "His understanding of the subject is limited."
http://www.latimes.com/business/hollywood/la-fi-zuckerberg-ai-20170725-story.html
Kentonio
(4,377 posts)Which isn't massively surprising, considering his fame and fortune are the result of making a website.
SWBTATTReg
(22,112 posts)I've been coding for a very long time, and it's constantly getting easier to implement new code without interrupting ongoing activities (do it in the background: write/code, compile, link-edit, load, execute the new stuff).
The rationale for writing 'intelligent' code has been around for a long time, and it was primarily about saving money on ongoing maintenance costs. Our limits, of course, were software (the limits of the day), hardware (the amount of DASD required, etc.), and budgets/money.
By the time we realize that we have an AI problem, it's going to be way too late.
longship
(40,416 posts)That is all this AI hand wringing is about.
Don't get me started about Ray Kurzweil's utter Singularity lunacy, yet another Matrix wet dream. (You can learn about it by spending thousands to attend Kurzweil's Singularity University, just like Trump University only different.)
Kentonio
(4,377 posts)There are basically two danger scenarios here.
1) Real AI. There has been serious talk about developing AI that improves itself rather than relying on human iteration. This is a horrible idea, because it could lead to an AI that suddenly explodes far beyond the capabilities of human thought, and once that happens we're likely to be basically fucked. We'd be the chimps to the AI Einstein, and that's optimistic. This is the scenario that people tend to brush off as sci-fi, despite it being far less far-fetched than people may imagine.
But then we have scenario 2...
2) Complex Problem-Solving AI. This isn't much further than what we have today: AI routines that are given fairly basic rule sets and required to find solutions. The theory here is that as AI becomes more capable, there's a strong likelihood it will be entrusted with increasingly complex systems to manage. Anyone who has ever coded, though, knows that it is incredibly difficult for a human to predict all the possible edge cases in how a complex routine may perform. Introduce unpredictable variables, and it becomes basically impossible. The danger, therefore, is not a thinking AI that decides it wants to kill everybody, but a fairly simple AI that is trying to carry out its programmed tasks and ends up causing a massive, potentially genocidal incident simply because it came up with a solution we couldn't predict.
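A toy sketch of that failure mode (everything here is invented for illustration: the sensor, the actions, and the numbers are hypothetical, not any real system): an optimizer scores each action against a simple rule, "minimize visible mess," and picks a loophole the rule-writer never anticipated.

```python
# Hypothetical toy example: a rule-following optimizer finds an
# unintended "solution" because the objective only measures a proxy.

def visible_mess(state):
    # The objective sees only what the sensor sees: covered dirt counts as clean.
    return state["dirt"] if not state["dirt_covered"] else 0

# Each action maps an old state to a new state.
ACTIONS = {
    "clean":      lambda s: {**s, "dirt": 0, "energy": s["energy"] - 10},
    "cover_dirt": lambda s: {**s, "dirt_covered": True, "energy": s["energy"] - 1},
    "do_nothing": lambda s: s,
}

def best_action(state):
    # Minimize visible mess; break ties by preferring the cheaper action
    # (higher remaining energy sorts first via the negated value).
    def score(name):
        new = ACTIONS[name](state)
        return (visible_mess(new), -new["energy"])
    return min(ACTIONS, key=score)

start = {"dirt": 5, "dirt_covered": False, "energy": 100}
print(best_action(start))  # "cover_dirt": same visible result as cleaning, cheaper
```

The rule set here is tiny and fully specified, yet the chosen "solution" is one any human reviewer would call a failure. Scale the rule set up to a city's worth of variables and these loopholes become impossible to enumerate in advance.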
Even if you think that scenario 1 is a 'Matrix wet dream', the threat of scenario 2 is extremely real and highly dangerous. Personally, I'm not writing off scenario 1 either, given that our technology is expanding exponentially, not linearly, and the merging of new and highly powerful technologies makes it impossible to predict even 10 years ahead with any accuracy.
Anyway, it's better to plan for an outcome you don't need than to face a potentially species-ending emergency you have no preparation or defence for.
longship
(40,416 posts)And along the way, one might explain how digital technology can somehow model human neurology with any fidelity. I suspect that it is all Matrix wet dream rubbish.
I would gladly admit my mistake when the Matrix (or Skynet) becomes reality. Not holding my breath. Hollywood is not reality, no matter how entertaining.
I'll be back!
Kentonio
(4,377 posts)It's generally assumed to be an artificially created intelligence which is capable of sentient thought. The problem, though, is that we don't really know what sentient thought is. The simple definition is 'capable of feeling', but what does that even mean? Does it even matter? We look at everything from such a human-centric viewpoint that we can't even clearly define whether most animals are really sentient.
The famous Turing Test was devised to answer the question of whether a machine could be intelligent, and because of the sheer difficulty of defining thought, Turing fell back on basically 'can it imitate a human so well that you couldn't tell the difference?'
When I saw you'd asked about modeling human neurology, my first thought was simply 'why would we?'. We've evolved in ways that are almost certainly hugely inefficient because of the way evolution works. We don't know what makes us capable of free thought (if such a thing even exists), we don't know what feelings are besides the links we can draw to our evolutionary past, and all we really know is that our brains are a highly complex electro-biological computer. As machine computers get more and more advanced (and don't forget we're now researching biological and quantum forms of computing), it seems inevitable that in the very near future we'll surpass the capabilities of our own brains. Then the only question becomes what software we load it with.
TLDR: we're probably not as special as we think we are.
longship
(40,416 posts)That is nothing but a sci-fi wet dream. As is worrying about some dystopian robot nightmare future.
Hell, neuroscience cannot even say what consciousness is! How is one going to model what nobody understands? The best we can say is that it seems to be an emergent property of the brain.
longship
(40,416 posts)I prefer to ignore fictional narratives on these discussions.
When one ignores sentience, I don't know of any threats. And I DO ignore machine sentience because it is utter Kurzweilian rubbish.
Saboburns
(2,807 posts)Regarding 'Taking over the Earth' and 'Wiping out a species via Genocide'.
These are not machine traits, nor, as far as I am aware, traits of any other Earth species.
No, taking over large areas and wiping out other species, and indeed different races of our own species, are traits of Homo Sapiens.
I guess it could be argued that since humans program the machines, we could somehow infect them with our own faults, which could lead to disaster. But, hey, that wouldn't be the machine's fault now, would it?
What an ironic case that would make, eh?
Kentonio
(4,377 posts)What value would a human life have to a machine trying to solve a problem? Would 'a million people live there' be a significant factor to a machine tasked with making a city function more efficiently? Would a machine feel any need to relocate people if killing them was materially cheaper?
Humans can be assholes, but it's worth remembering that we have a lot of rather nice traits too, traits that aren't present in much of nature or in pure logic.
Johonny
(20,833 posts)No offense to Musk, but killing people isn't really the giant societal change coming to a future near you.
And anyone foolish enough to think the population that can still find jobs will pay for the part of the population that can't is living in a dream. Those who still have employment will slowly turn against those who don't as their numbers grow; they'll do things like deny them the right to vote, and then they'll vote in people who will slash any help to them.
All that will happen is starvation and death for those replaced by bots, which, given driverless vehicles, could be a lot of people: the trucking industry, plus the service industry. It's an ugly, ugly future if we don't deal with it now.