General Discussion
tavernier
(12,410 posts)
except that we will exterminate ourselves.
Nitram
(22,945 posts)
Now who's laughing?
Hortensis
(58,785 posts)
Please.
SergeStorms
(19,204 posts)
The black goo that humans so greatly crave will be our demise. Oil, petro-chemicals, and especially plastics are killing animals, the oceans, and ultimately humans themselves. The water we drink is so polluted with micro-beads of plastic that who knows what effects they'll have on the human body? Safe to say, it isn't going to be good.
There is no intelligent life on earth.
Plucketeer
(12,882 posts)
There were NO guarantees for the creatures of this orb when they enjoyed their heydays here. Whether they died from Super-Volcanoes or Asteroids doesn't matter. For the most part they died off almost totally. But like the Phoenix, life arose from the wreckage and set about adapting (evolving) to the revised climate that had developed after the cataclysmic die-off.
The next big extinction event is going to happen - one way or another. DO WE have the powers to keep it from being a result of our inability to handle what we've wrought with our comprehensive brains, or will Yellowstone explode, or will a giant, potato-looking rock dumbly blunder its way into a collision with our fragile, spinner of a home? That's why I say it's not our destiny to go out in a nuclear haze. It is an option that we can choose - but so is the slow-motion explosion we're in the midst of right now - global warming. Global warming has the potential to cause as much death as, or more than, a limited nuclear shootout. But again - don't discount the possibility of a big rock or a Yellowstone blowout. Both of those are possibilities we (unlike previous waves of life) can comprehend but can do nothing about. As to doing ourselves in - even if global solidarity and sanity broke out and infected every human on earth, there's still a fair chance we could literally stand and watch as our demise played out at NO ONE'S fault.
NeoGreen
(4,031 posts)
...
PatrickforO
(14,602 posts)
He was trapped in that disabled body for so long, and did so much good anyway, that he is an inspiration for all of us.
Let us hope he is free now.
usaf-vet
(6,232 posts)
Free from the confines of this world.
BlancheSplanchnik
(20,219 posts)
Yes. Overpopulation. We can't continue as we've been going.
We will miss you Sir, Dr. Hawking. 😪💔
Hortensis
(58,785 posts)
He's being mourned around the planet. Imagine being such a person.
"Some people would claim that things like love, joy and beauty belong to a different category from science and can't be described in scientific terms, but I think they can now be explained by the theory of evolution." Stephen Hawking
Who wasn't quite sure yet that time travel would be possible, but whom we will for sure be visiting again if it is.
BlancheSplanchnik
(20,219 posts)
I think we get to time travel as much as we want after we die. Maybe a visit with Dr. Hawking (to go to a comedy show together!) will be in order.
Hortensis
(58,785 posts)
updated that half-century-old voice generator, but he said he never found a voice he liked better and it had become part of his identity. If anyone could find a way to have it in a next existence...
BlancheSplanchnik
(20,219 posts)
Lol!
😄.
Hortensis
(58,785 posts)
dreamland
(964 posts)
dlk
(11,597 posts)
poboy2
(2,078 posts)
"Once humans develop artificial intelligence, it would take off on its own and re-design itself at an ever increasing rate," he reportedly told the BBC.
The development of full artificial intelligence could spell the end of the human race.
Could thinking machines take over?
I appreciate the issue of computers taking over (and one day ending humankind) being raised by someone as high-profile, able and credible as Prof Hawking, and it deserves a quick response.
The issue of machine intelligence goes back at least as far as the British code-breaker and father of computer science, Alan Turing, who in 1950 considered the question: can machines think?
The issue of these intelligent machines taking over has been discussed in one way or another in a variety of popular media and culture. Think of the movies Colossus: The Forbin Project (1970) and Westworld (1973), and more recently Skynet in The Terminator (1984) and its sequels, to name just a few.
Common to all of these is the issue of delegating responsibility to machines. The notion of the technological singularity (or machine super-intelligence) is something which goes back at least as far as artificial intelligence pioneer, Ray Solomonoff who, in 1967, warned:
Although there is no prospect of very intelligent machines in the near future, the dangers posed are very serious and the problems very difficult. It would be well if a large number of intelligent humans devote a lot of thought to these problems before they arise.
It is my feeling that the realization of artificial intelligence will be a sudden occurrence. At a certain point in the development of the research we will have had no practical experience with machine intelligence of any serious level: a month or so later, we will have a very intelligent machine and all the problems and dangers associated with our inexperience.
As well as giving this variant of Hawking's warning back in 1967, in 1985 Solomonoff endeavoured to give a time scale for the technological singularity and reflect on social effects.
I share the concerns of Solomonoff, Hawking and others regarding the consequences of faster and more intelligent machines, but American author, computer scientist and inventor Ray Kurzweil is one of many who see the benefits.
Whoever might turn out to be right (provided our planet isn't destroyed by some other danger in the meantime), I think Solomonoff was prescient in 1967 in advocating that we devote a lot of thought to this.
Machines already taking over
In the meantime, we see increasing amounts of responsibility being delegated to machines. On the one hand, this might be hand-held calculators, routine mathematical calculations or global positioning systems (GPSs).
On the other hand, this might be systems for air traffic control, guided missiles, driverless trucks on mine sites or the recent trial appearances of driverless cars on our roads.
Humans delegate responsibility to machines for reasons including improving time, cost and accuracy. But nightmare scenarios involving damage by, say, a driverless vehicle would raise issues of law, insurance and attribution of responsibility.
It is argued that computers might take over when their intelligence supersedes that of humans. But there are also other risks with this delegation of responsibility.
-
http://www.iflscience.com/technology/stephen-hawking-right-could-ai-lead-end-humankind/
poboy2
(2,078 posts)
Technological singularity
From Wikipedia, the free encyclopedia
The technological singularity (also, simply, the singularity)[1] is the hypothesis that the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.[2] According to this hypothesis, an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) would enter a "runaway reaction" of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful superintelligence that would, qualitatively, far surpass all human intelligence. Stanislaw Ulam reports a discussion with John von Neumann "centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue".[3] Subsequent authors have echoed this viewpoint.[2][4] I. J. Good's "intelligence explosion" model predicts that a future superintelligence will trigger a singularity.[5] Emeritus professor of computer science at San Diego State University and science fiction author Vernor Vinge said in his 1993 essay The Coming Technological Singularity that this would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate.[5]
Four polls conducted in 2012 and 2013 suggested that the median estimate among experts for when artificial general intelligence (AGI) would arrive was 2040 to 2050, depending on the poll.[6][7]
Many notable personalities, including Stephen Hawking and Elon Musk, consider the uncontrolled rise of artificial intelligence as a matter of alarm and concern for humanity's future.[8][9] The consequences of the singularity and its potential benefit or harm to the human race have been hotly debated by various intellectual circles.[citation needed]
-
https://en.wikipedia.org/wiki/Technological_singularity
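The "runaway reaction" of self-improvement cycles that the excerpt describes can be sketched as a toy feedback loop. This is purely an illustration of the concept: every number and name below is made up for the sketch and comes from neither the article nor any actual forecast.

```python
# Toy model of I. J. Good's "intelligence explosion": capability
# multiplies by an improvement factor each generation, and the factor
# itself creeps upward with capability (smarter systems redesign
# themselves more effectively). Arbitrary parameters, not predictions.

def run_explosion(level=1.0, rate=1.1, generations=30):
    """Return capability level after each self-improvement cycle."""
    history = [level]
    for _ in range(generations):
        level *= rate           # this generation improves on the last
        rate += 0.001 * level   # the improvement factor itself improves
        history.append(level)
    return history

levels = run_explosion()
ratios = [b / a for a, b in zip(levels, levels[1:])]
# Each generation's growth ratio exceeds the last, so growth is
# super-exponential: later "doublings" arrive faster and faster.
```

The point of the sketch is the second line inside the loop: ordinary exponential growth has a fixed rate, while the singularity hypothesis is that the rate is itself a function of capability, which is what makes the curve run away.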
bdamomma
(63,955 posts)
and RIP, you have contributed so much.
sweetroxie
(776 posts)
We can't afford to lose any more great individuals. RIP, Dr. Hawking.
mitch96
(13,940 posts)
Smart and funny too!
Heard this on the news..
And there are some weird coincidences about him...
He was born on the anniversary of Galileo's death. Big mind.
He died on the same day Albert Einstein was born... another big mind.
Which is March 14... aka 3/14, or Pi Day! A very big number.
A fitting tribute to a physics guy... 3.14159, ad nauseam.
m