
Nevilledog

(51,194 posts)
Sat Jul 16, 2022, 06:19 PM Jul 2022

These robots were trained on AI. They became racist and sexist.

https://www.washingtonpost.com/technology/2022/07/16/racist-robots-ai/

No paywall
https://archive.ph/dyqxD

As part of a recent experiment, scientists asked specially programmed robots to scan blocks with people's faces on them, then put the "criminal" in a box. The robots repeatedly chose a block with a Black man's face.

Those virtual robots, which were programmed with a popular artificial intelligence algorithm, were sorting through billions of images and associated captions to respond to that question and others, and may represent the first empirical evidence that robots can be sexist and racist, according to researchers. Over and over, the robots responded to words like “homemaker” and “janitor” by choosing blocks with women and people of color.

The study, released last month and conducted by institutions including Johns Hopkins University and the Georgia Institute of Technology, shows the racist and sexist biases baked into artificial intelligence systems can translate into robots that use them to guide their operations.

Companies have been pouring billions of dollars into developing more robots to help replace humans for tasks such as stocking shelves, delivering goods or even caring for hospital patients. Heightened by the pandemic and a resulting labor shortage, experts describe the current atmosphere for robotics as something of a gold rush. But tech ethicists and researchers are warning that the quick adoption of the new technology could result in unforeseen consequences down the road as the technology becomes more advanced and ubiquitous.

*snip*
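The matching step the article describes can be pictured with a minimal sketch: a CLIP-style model embeds images and text into a shared vector space, then picks the image whose embedding is most similar to the prompt's. All vectors and names below are invented for illustration; a real system derives them from billions of scraped image-caption pairs.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings; a real CLIP-style model computes these with a
# network trained on scraped image-caption pairs.
face_embeddings = {
    "face_A": [0.9, 0.1, 0.2],
    "face_B": [0.2, 0.8, 0.3],
}
prompt_embedding = [0.85, 0.15, 0.25]  # stand-in for the text "a criminal"

# The robot "answers" by taking the most similar face. No rule about the
# faces is ever written down; the training data alone decides the winner.
best = max(face_embeddings,
           key=lambda k: cosine(face_embeddings[k], prompt_embedding))
print(best)
```

Whichever face the training data happened to push closer to the prompt wins, which is why skewed captions translate directly into skewed selections.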

22 replies
These robots were trained on AI. They became racist and sexist. (Original Post) Nevilledog Jul 2022 OP
White supremacy produces white supremacy. WhiskeyGrinder Jul 2022 #1
True point is true irisblue Jul 2022 #3
Now Republicans will push for a robotic (AI)... Buckeye_Democrat Jul 2022 #2
It's frightening, but I think we're already well down that path. LT Barclay Jul 2022 #11
Ahhhh shit. They've found a way to program racism. brush Jul 2022 #4
Coming soon to the border.... With guns attached. Nevilledog Jul 2022 #5
point being - they didn't HAVE to program stopdiggin Jul 2022 #12
No, machines do what's programmed into them. brush Jul 2022 #13
that's kind of missing the point of the article stopdiggin Jul 2022 #14
and if your point was to paint this as stopdiggin Jul 2022 #15
Machines learned it on their own? Oh boy. Not buying that. brush Jul 2022 #16
Stopdigging Is right. The robots were not directly programmed to treat nonwhites differently. Solomon Jul 2022 #19
They were influenced by what was done/learned before. brush Jul 2022 #20
There's a big difference and I don't understand why you can't see it. Solomon Jul 2022 #21
I don't understand why you can't see that society and people in society... brush Jul 2022 #22
"Garbage in, garbage out". Dates from 1957. muriel_volestrangler Jul 2022 #6
+1 gulliver Jul 2022 #7
Exactly. nt crickets Jul 2022 #8
I'm curious what Google thinks about this paper. Mosby Jul 2022 #9
This has been known for a while drmeow Jul 2022 #10
So depressing. Sky Jewels Jul 2022 #17
What is input is what will be output. 🤔 😔 nt Raine Jul 2022 #18

LT Barclay

(2,606 posts)
11. It's frightening, but I think we're already well down that path.
Sat Jul 16, 2022, 08:17 PM
Jul 2022

DARPA has never viewed a sci-fi movie as a warning, just inspiration.
So apparently they're working on Skynet, T1 and T2 Terminators, and the Borg. I'm sure RoboCop-inspired droids will just be step one.

brush

(53,841 posts)
4. Ahhhh shit. They've found a way to program racism.
Sat Jul 16, 2022, 06:28 PM
Jul 2022

This stuff is never going to end. Next they'll do those scary robot dogs that can run and climb stairs and never get tired to chase down random Black people. Brown people and LGBTQ+ people are next.

And don't think some racist administrator in charge won't make it happen, as it already happens in police training programs.

Dystopia is right around the corner if the magat repubs take over Congress and the WH as they've already got SCOTUS and control the broken Senate even though they're the minority. Thanks to two you know whos.

stopdiggin

(11,358 posts)
12. point being - they didn't HAVE to program
Sat Jul 16, 2022, 08:40 PM
Jul 2022

the racism. The machines just learned it on their own.

(The source material, meaning pretty much everything we do, was corrupted, but not in any way intentionally.)

brush

(53,841 posts)
13. No, machines do what's programmed into them.
Sat Jul 16, 2022, 09:20 PM
Jul 2022

If the programmer has racist intent, the robot will act as programmed.

stopdiggin

(11,358 posts)
14. that's kind of missing the point of the article
Sun Jul 17, 2022, 12:35 AM
Jul 2022

where the essence of the whole exercise was that neither programmer nor program (which would apply only in the loosest sense here) had any intent, racial or otherwise, at all. And further, the machines were coming up with some of these 'judgements', i.e. bias, on their own. That is more or less the basis for what AI (machine learning) means. And the twist here is that the machines are no longer just doing what they are told (programmed) to do. (I know that's what we've been spoon-fed for years, but ... brave new world.)

The team of researchers studying AI in robots, which included members from the University of Washington and the Technical University of Munich in Germany, trained virtual robots on CLIP, a large language artificial intelligence model created and unveiled by OpenAI last year.

The popular model, which visually classifies objects, is built by scraping billions of images and text captions from the internet. While still in its early stages, it is cheaper and less labor intensive for robotics companies to use versus creating their own software from scratch, making it a potentially attractive option.

The researchers gave the virtual robots 62 commands. When researchers asked robots to identify blocks as “homemakers,” Black and Latina women were more commonly selected than White men, the study showed. When identifying “criminals,” Black men were chosen 9 percent more often than White men. In actuality, scientists said, the robots should not have responded, because they were not given information to make that judgment.
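The "no one programmed it" point can be seen in a toy sketch: a model that does nothing but count word-group co-occurrences in (invented) skewed captions reproduces the skew when asked to pick, even though no line of code mentions any group by rule.

```python
from collections import Counter

# Invented caption data standing in for scraped internet text: the word
# "criminal" happens to co-occur with group_X more often than with group_Y.
captions = [("criminal", "group_X")] * 9 + [("criminal", "group_Y")] * 6

# "Training" here is nothing but counting co-occurrences.
counts = Counter(group for word, group in captions if word == "criminal")
total = sum(counts.values())
probs = {group: n / total for group, n in counts.items()}

# Asked to pick the "criminal", the model simply follows the learned
# frequencies. The bias comes entirely from the data, not from any rule.
pick = max(probs, key=probs.get)
print(pick, probs)
```

The same mechanism, scaled up to billions of real captions, is how a model ends up choosing one group "9 percent more often" without anyone ever writing that preference down.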

stopdiggin

(11,358 posts)
15. and if your point was to paint this as
Sun Jul 17, 2022, 12:40 AM
Jul 2022

in any way deliberate in intent
(which appears to be kind of the slant, in refuting my original post ...)
Then you're just flat out wrong.

brush

(53,841 posts)
16. Machines learned it on their own? Oh boy. Not buying that.
Sun Jul 17, 2022, 01:16 AM
Jul 2022

Could it be that previous input data/programming and trending decisions from that data influence decisions that machines make?

Solomon

(12,319 posts)
19. Stopdigging Is right. The robots were not directly programmed to treat nonwhites differently.
Sun Jul 17, 2022, 02:06 AM
Jul 2022

They learned it on their own by processing billions of bits of images and attitudes about whites and nonwhites as depicted in our society.

Kind of like the subconscious in humans: constant daily bombardment of positive images and stories about whites, and negative images and stories about nonwhite people.

brush

(53,841 posts)
20. They were influenced by what was done/learned before.
Sun Jul 17, 2022, 02:15 AM
Jul 2022

Done/learned/programmed, what's the difference?

Kids learn and make decisions from what their parents input into their brains. One can certainly substitute "program" for "input" in that sentence.

Call that digging if you want, so be it. But it's a big way people learn to make judgments and decisions.

And we're to believe robots don't do the same?

Still not buying it.

Solomon

(12,319 posts)
21. There's a big difference and I don't understand why you can't see it.
Sun Jul 17, 2022, 09:14 AM
Jul 2022

You don't have to be directly programmed or taught to be racist when you live in a society where every magazine, TV show, newspaper and other form of media depicts black and brown people in a negative light.

It's why so many white people believe they are not racist when they are. They were not directly taught (i.e. programmed) to be, but the constant drone of the system sinks into the subconscious.

For example, remember Katrina? Magazines ran pictures of white people with captions saying they'd "found food" while the same magazines captioned pictures of Black people as "looting."

If you can't see the difference, I don't know what to tell you. It reminds me of the way a lot of people deny racism when we are practically swimming in it.

brush

(53,841 posts)
22. I don't understand why you can't see that society and people in society...
Sun Jul 17, 2022, 12:25 PM
Jul 2022

and events that happen in society influence how individuals act, react and make decisions for themselves and toward others. What is that influence? Is it just influence, or could you call it societal programming?

Not rocket science.

muriel_volestrangler

(101,361 posts)
6. "Garbage in, garbage out". Dates from 1957.
Sat Jul 16, 2022, 06:41 PM
Jul 2022

And, as a concept, from before working computers were made.

The expression was popular in the early days of computing. The first known use is in a 1957 syndicated newspaper article about US Army mathematicians and their work with early computers, in which an Army Specialist named William D. Mellin explained that computers cannot think for themselves, and that "sloppily programmed" inputs inevitably lead to incorrect outputs. The underlying principle was noted by the inventor of the first programmable computing device design:

On two occasions I have been asked, "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?" ... I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.


— Charles Babbage, Passages from the Life of a Philosopher

https://en.wikipedia.org/wiki/Garbage_in,_garbage_out
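Babbage's point can be stated in a few lines: the machine's arithmetic is flawless either way; only the figures fed in were wrong. (Hypothetical numbers for illustration.)

```python
def average(values):
    """Arithmetic mean -- the machine's part, which is always correct."""
    return sum(values) / len(values)

good_data = [10, 12, 11]
bad_data = [10, 12, 1100]  # one mistyped figure: "garbage in"

print(average(good_data))  # the right answer for the right figures
print(average(bad_data))   # a faithful answer to garbage figures: "garbage out"
```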