Welcome to DU!
The truly grassroots left-of-center political community where regular people, not algorithms, drive the discussions and set the standards.
Join the community:
Create a free account
Support DU (and get rid of ads!):
Become a Star Member
Latest Breaking News
General Discussion
The DU Lounge
All Forums
Issue Forums
Culture Forums
Alliance Forums
Region Forums
Support Forums
Help & Search
LudwigPastorius
LudwigPastorius's Journal
LudwigPastorius's Journal
September 27, 2022
More here: https://www.theguardian.com/books/2022/sep/27/denver-riggleman-book-republicans-paul-gosar-louie-gohmert
Ex-GOP Congressman Riggleman: Gosar and Gohmert have 'serious cognitive issues'
I'd say that assessment applies to a majority of the current Republicans in office.
But, yeah. If you go on camera complaining that "if you're a Republican, you can't even lie to Congress or to an FBI agent", there is something seriously wrong with you.
The Republican congressmen Louis Gohmert and Paul Gosar adopted such extreme, conspiracy-tinged positions, even before the US Capitol attack, that a fellow member of the rightwing Freedom Caucus thought they may have had serious cognitive issues.
-snip-
Riggleman describes one meeting in which Gohmert promoted a conspiracy theory related to master algorithms, saying he suspected there was a secret technology shadow-banning conservatives across all platforms.
-snip-
Riggleman says Gosar and Gohmert seemed to be joined at the brain stem when it came to their eagerness to believe wild, dramatic fantasies about Democrats, the media and big tech.
-snip-
Riggleman also calls Gosar a blatant white supremacist, describing him and the Iowa Republican Steve King making a case for white supremacy over pulled pork and ribs.
-snip-
Riggleman describes one meeting in which Gohmert promoted a conspiracy theory related to master algorithms, saying he suspected there was a secret technology shadow-banning conservatives across all platforms.
-snip-
Riggleman says Gosar and Gohmert seemed to be joined at the brain stem when it came to their eagerness to believe wild, dramatic fantasies about Democrats, the media and big tech.
-snip-
Riggleman also calls Gosar a blatant white supremacist, describing him and the Iowa Republican Steve King making a case for white supremacy over pulled pork and ribs.
More here: https://www.theguardian.com/books/2022/sep/27/denver-riggleman-book-republicans-paul-gosar-louie-gohmert
September 16, 2022
For example, the Paperclip Maximizer thought experiment:
https://www.lesswrong.com/tag/paperclip-maximizer
Oxford researchers: Superintelligent AI is "likely" to cause an existential catastrophe for humanity
https://www.vice.com/en/article/93aqep/google-deepmind-researcher-co-authors-paper-saying-ai-will-eliminate-humanityThe most successful AI models today are known as GANs, or Generative Adversarial Networks. They have a two-part structure where one part of the program is trying to generate a picture (or sentence) from input data, and a second part is grading its performance. What the new paper proposes is that at some point in the future, an advanced AI overseeing some important function could be incentivized to come up with cheating strategies to get its reward in ways that harm humanity.
Under the conditions we have identified, our conclusion is much stronger than that of any previous publicationan existential catastrophe is not just possible, but likely, Cohen said on Twitter in a thread about the paper.
"In a world with infinite resources, I would be extremely uncertain about what would happen. In a world with finite resources, there's unavoidable competition for these resources," Cohen told Motherboard in an interview. "And if you're in a competition with something capable of outfoxing you at every turn, then you shouldn't expect to win. And the other key part is that it would have an insatiable appetite for more energy to keep driving the probability closer and closer."
The paper envisions life on Earth turning into a zero-sum game between humanity, with its needs to grow food and keep the lights on, and the super-advanced machine, which would try and harness all available resources to secure its reward and protect against our escalating attempts to stop it. Losing this game would be fatal, the paper says. These possibilities, however theoretical, mean we should be progressing slowlyif at alltoward the goal of more powerful AI.
Under the conditions we have identified, our conclusion is much stronger than that of any previous publicationan existential catastrophe is not just possible, but likely, Cohen said on Twitter in a thread about the paper.
"In a world with infinite resources, I would be extremely uncertain about what would happen. In a world with finite resources, there's unavoidable competition for these resources," Cohen told Motherboard in an interview. "And if you're in a competition with something capable of outfoxing you at every turn, then you shouldn't expect to win. And the other key part is that it would have an insatiable appetite for more energy to keep driving the probability closer and closer."
The paper envisions life on Earth turning into a zero-sum game between humanity, with its needs to grow food and keep the lights on, and the super-advanced machine, which would try and harness all available resources to secure its reward and protect against our escalating attempts to stop it. Losing this game would be fatal, the paper says. These possibilities, however theoretical, mean we should be progressing slowlyif at alltoward the goal of more powerful AI.
For example, the Paperclip Maximizer thought experiment:
https://www.lesswrong.com/tag/paperclip-maximizer
Profile Information
Gender: MaleMember since: Sat Jan 21, 2017, 12:51 AM
Number of posts: 9,137