General Discussion
Researchers, scared by their own work, hold back "deepfakes for text" AI
OpenAI's GPT-2 algorithm shows machine learning could ruin online content for everyone.
https://arstechnica.com/information-technology/2019/02/researchers-scared-by-their-own-work-hold-back-deepfakes-for-text-ai/
Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2 along with sampling code. We are not releasing the dataset, training code, or GPT-2 model weights. Nearly a year ago we wrote in the OpenAI Charter: we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research, and we see this current work as potentially representing the early beginnings of such concerns, which we expect may grow over time. This decision, as well as our discussion of it, is an experiment: while we are not sure that it is the right decision today, we believe that the AI community will eventually need to tackle the issue of publication norms in a thoughtful way in certain research areas.
OpenAI is funded by contributions from a group of technology executives and investors connected to what some have referred to as the PayPal "mafia": Elon Musk, Peter Thiel, Jessica Livingston, and Sam Altman of Y Combinator, former PayPal COO and LinkedIn co-founder Reid Hoffman, and former Stripe Chief Technology Officer Greg Brockman. Brockman now serves as OpenAI's CTO. Musk has repeatedly warned of the potential existential dangers posed by AI, and OpenAI is focused on trying to shape the future of artificial intelligence technology, ideally moving it away from potentially harmful applications.
Given present-day concerns about how fake content has been used both to generate money for "fake news" publishers and to spread misinformation and undermine public debate, GPT-2's output certainly qualifies as concerning. Unlike older text-generation bots, such as those based on Markov chain algorithms, GPT-2 did not lose track of what it was writing about as it generated output; it kept everything in context.
For example: given a two-sentence entry, GPT-2 generated a fake science story on the discovery of unicorns in the Andes, a story about the economic impact of Brexit, a report about a theft of nuclear materials near Cincinnati, a story about Miley Cyrus being caught shoplifting, and a student's report on the causes of the US Civil War.
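The Markov chain bots the article contrasts with GPT-2 can be sketched in a few lines. Because each next word is drawn only from counts conditioned on the single previous word, a model like this has no memory of the wider topic, which is exactly why such bots "lose track" of what they were writing about. (A toy illustration only, not OpenAI's or anyone's production code.)

```python
# Toy word-level Markov chain text generator.
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed right after it."""
    chain = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)
    return chain

def generate(chain, start, length=10, seed=0):
    """Walk the chain: each step depends ONLY on the last word emitted."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        options = chain.get(out[-1])
        if not options:
            break  # dead end: the last word was never followed by anything
        out.append(random.choice(options))
    return " ".join(out)

corpus = "the cat sat on the mat the dog sat on the log"
chain = build_chain(corpus)
print(generate(chain, "the", length=8, seed=42))
```

Note that the generator happily hops from "the cat" to "the dog" mid-thought; GPT-2's advance was conditioning on a long window of prior text instead of just the last word.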
We truly won't be able to believe our eyes and ears one day....
Roland99
(53,342 posts)

Thomas Hurt
(13,903 posts)

DetlefK
(16,423 posts)
(16,423 posts)If a story gets posted, it must be clear who posted it and who shall be held accountable if it turns out to be false.
harumph
(1,897 posts)Just spitballing ...
Legit news sources would be accompanied by a crypto token that could be authenticated through a third party.
Intelligent browsers could screen out stories/images/sound files... that lack the authenticated token - or mark them as "dubious."
There's probably a business case for that.
There are ways to screen out deceptions - just as there are ways to prevent robocalling - but because it's time-consuming and not considered "profitable," it will likely have to be mandated by regulation.
If things are fucked up - it's because someone somewhere is making $$ on the fuckedupedness.
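harumph's token idea can be sketched roughly like this: a publisher registers a key with a third-party authenticator, attaches a token to each story, and a browser verifies the token before displaying the story. The sketch below uses an HMAC and invented names purely to stay self-contained; a real scheme would use public-key signatures so browsers never hold the publisher's secret.

```python
# Hypothetical sketch of a "crypto token" for news stories.
# All function names and the shared-secret HMAC design are my own
# illustration, not an existing system or API.
import hmac
import hashlib

def issue_token(secret: bytes, story: bytes) -> str:
    """Publisher side: derive a token bound to the story's exact bytes."""
    return hmac.new(secret, story, hashlib.sha256).hexdigest()

def verify_token(secret: bytes, story: bytes, token: str) -> bool:
    """Browser side: recompute and compare in constant time."""
    expected = issue_token(secret, story)
    return hmac.compare_digest(expected, token)

secret = b"shared-with-third-party-authenticator"  # placeholder key
story = b"Senate passes budget bill"

token = issue_token(secret, story)
print(verify_token(secret, story, token))            # genuine story -> True
print(verify_token(secret, b"Altered text", token))  # tampered story -> False
```

The key point for the "mark it as dubious" idea: any edit to the story bytes invalidates the token, so a browser can flag content whose token fails to verify or is simply absent.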
Roland99
(53,342 posts)

Incl. social media
hunter
(38,309 posts)

How do I know some random Joe in Missouri isn't an AI in Russia? How do I know Mitch McConnell isn't a robot?
If an AI's blathering is consistent, how is that any different than the blathering of, say, our asshole president or any hate radio troll?
What eliminating anonymity would do is make it much harder for people to tell their own truths, especially in places or situations where such honesty might cost a person their job, their freedom, or even their life.
42bambi
(1,753 posts)

perfectly programs our brains, this won't make any difference. Our masters will guide us to wherever they/it want to take us...but who/what will be our masters? I'll leave it right there....