
Roland99

(53,342 posts)
Thu Aug 1, 2019, 09:58 AM

Researchers, scared by their own work, hold back "deepfakes for text" AI

Researchers, scared by their own work, hold back “deepfakes for text” AI
OpenAI's GPT-2 algorithm shows machine learning could ruin online content for everyone.

https://arstechnica.com/information-technology/2019/02/researchers-scared-by-their-own-work-hold-back-deepfakes-for-text-ai/

The system's performance was so disconcerting that the researchers are releasing only a reduced version of GPT-2, based on a much smaller text corpus. In a blog post on the project and this decision, researchers Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever wrote:

Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2 along with sampling code. We are not releasing the dataset, training code, or GPT-2 model weights. Nearly a year ago we wrote in the OpenAI Charter: “we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research,” and we see this current work as potentially representing the early beginnings of such concerns, which we expect may grow over time. This decision, as well as our discussion of it, is an experiment: while we are not sure that it is the right decision today, we believe that the AI community will eventually need to tackle the issue of publication norms in a thoughtful way in certain research areas.


OpenAI is funded by contributions from a group of technology executives and investors connected to what some have referred to as the PayPal "mafia": Elon Musk, Peter Thiel, Jessica Livingston and Sam Altman of Y Combinator, former PayPal COO and LinkedIn co-founder Reid Hoffman, and former Stripe Chief Technology Officer Greg Brockman. Brockman now serves as OpenAI's CTO. Musk has repeatedly warned of the potential existential dangers posed by AI, and OpenAI is focused on trying to shape the future of artificial intelligence technology, ideally moving it away from potentially harmful applications.

Given present-day concerns about how fake content has been used both to generate money for "fake news" publishers and to spread misinformation that undermines public debate, GPT-2's output certainly qualifies as concerning. Unlike other text-generation "bot" models, such as those based on Markov chain algorithms, the GPT-2 "bot" did not lose track of what it was writing about as it generated output; it kept everything in context.
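To see why Markov-chain generators drift off topic, here is a minimal word-level sketch in Python (illustrative only, not code from the article or from OpenAI): because the chain conditions only on the last word or two, it forgets the subject within a sentence.

```python
import random
from collections import defaultdict

# A word-level Markov chain conditions only on the last `order` words,
# so it loses the thread of a topic quickly -- the weakness the article
# contrasts with GPT-2's long-range coherence.
def build_chain(text, order=1):
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, order=1, length=30):
    out = list(random.choice(list(chain.keys())))
    for _ in range(length):
        choices = chain.get(tuple(out[-order:]))
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

corpus = ("scientists found unicorns in the andes . the unicorns spoke english . "
          "the economy reacted to brexit . nuclear materials were stolen near cincinnati .")
print(generate(build_chain(corpus)))
```

Run it a few times and the output hops between unicorns, Brexit, and Cincinnati mid-thought, because each next word depends only on the one before it.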

For example: given a two-sentence prompt, GPT-2 generated a fake science story about the discovery of unicorns in the Andes, a story about the economic impact of Brexit, a report about a theft of nuclear materials near Cincinnati, a story about Miley Cyrus being caught shoplifting, and a student's report on the causes of the US Civil War.
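The much smaller GPT-2 model that OpenAI did release can be sampled today. A minimal sketch using Hugging Face's transformers package (an assumption about tooling, not what the researchers themselves shipped), with a prompt paraphrased from OpenAI's unicorn demo:

```python
# Sample from the publicly released small GPT-2 checkpoint.
# "gpt2" here refers to the Hugging Face model hub name for that release.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = ("In a shocking finding, scientists discovered a herd of unicorns "
          "living in a remote valley in the Andes Mountains.")
print(generator(prompt, max_length=80, do_sample=True)[0]["generated_text"])
```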


We truly won’t be able to believe our eyes and ears one day....
Original Post: Researchers, scared by their own work, hold back "deepfakes for text" AI (Roland99, Aug 2019)
#1 And then there's this, too... (Roland99, Aug 2019)
#2 this is Big Brother..... (Thomas Hurt, Aug 2019)
#3 There is only one solution: Doing away with anonymity on the internet. (DetlefK, Aug 2019)
#5 Or, an independent third party authenticator service. (harumph, Aug 2019)
#6 Integrating some sort of blockchain technology into all online publishing. (Roland99, Aug 2019)
#7 How does that help? (hunter, Aug 2019)
#4 When all the powers that be (42bambi, Aug 2019)

DetlefK

(16,423 posts)
3. There is only one solution: Doing away with anonymity on the internet.
Thu Aug 1, 2019, 10:07 AM

If a story gets posted, it must be clear who posted it and who shall be held accountable if it turns out to be false.

harumph

(1,897 posts)
5. Or, an independent third party authenticator service.
Thu Aug 1, 2019, 10:54 AM

Just spitballing ...
Legit news sources would be accompanied by a crypto token that could be authenticated through the third party.
Intelligent browsers could screen out stories, images, and sound files that lack the authenticated token, or mark them as "dubious." A sketch of how that check might work follows below.
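Here is a minimal sketch of that token check, assuming Ed25519 signatures and Python's cryptography package. All of the names and the scheme itself are hypothetical fill-ins for illustration, not part of the post's proposal:

```python
# A publisher signs each story with its private key; a browser verifies
# the signature against the public key the publisher registered with the
# third-party authenticator.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

publisher_key = Ed25519PrivateKey.generate()   # held by the news outlet
public_key = publisher_key.public_key()        # registered with the authenticator

story = b"Researchers hold back 'deepfakes for text' AI ..."
token = publisher_key.sign(story)              # the "crypto token" shipped with the story

def browser_check(story, token, public_key):
    try:
        public_key.verify(token, story)
        return "authenticated"
    except InvalidSignature:
        return "dubious"

print(browser_check(story, token, public_key))        # authenticated
print(browser_check(b"tampered story", token, public_key))  # dubious
```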

There's probably a business case for that.

There are ways to screen out deceptions, just as there are ways to prevent robocalling, but because it's time-consuming and not considered "profitable," it will likely have to be mandated by regulation.

If things are fucked up - it's because someone somewhere is making $$ on the fuckedupedness.

hunter

(38,309 posts)
7. How does that help?
Thu Aug 1, 2019, 11:25 AM

How do I know some random Joe in Missouri isn't an AI in Russia? How do I know Mitch McConnell isn't a robot?

If an AI's blathering is consistent, how is that any different than the blathering of, say, our asshole president or any hate radio troll?

What eliminating anonymity would do is make it much harder for people to tell their own truths, especially in places or situations where such honesty might cost a person their job, their freedom, or even their life.

42bambi

(1,753 posts)
4. When all the powers that be
Thu Aug 1, 2019, 10:14 AM

perfectly program our brains, this won't make any difference. Our masters will guide us to wherever they/it want to take us... but who/what will be our masters? I'll leave it right there....
