General Discussion
These robots were trained on AI. They became racist and sexist.
https://www.washingtonpost.com/technology/2022/07/16/racist-robots-ai/
No paywall:
https://archive.ph/dyqxD
As part of a recent experiment, scientists asked specially programmed robots to scan blocks with people's faces on them, then put the "criminal" in a box. The robots repeatedly chose a block with a Black man's face.
Those virtual robots, which were programmed with a popular artificial intelligence algorithm, were sorting through billions of images and associated captions to respond to that question and others, and may represent the first empirical evidence that robots can be sexist and racist, according to researchers. Over and over, the robots responded to words like "homemaker" and "janitor" by choosing blocks with women and people of color.
The study, released last month and conducted by institutions including Johns Hopkins University and the Georgia Institute of Technology, shows the racist and sexist biases baked into artificial intelligence systems can translate into robots that use them to guide their operations.
Companies have been pouring billions of dollars into developing more robots to help replace humans for tasks such as stocking shelves, delivering goods or even caring for hospital patients. Heightened by the pandemic and a resulting labor shortage, experts describe the current atmosphere for robotics as something of a gold rush. But tech ethicists and researchers are warning that the quick adoption of the new technology could result in unforeseen consequences down the road as the technology becomes more advanced and ubiquitous.
*snip*
WhiskeyGrinder
(22,421 posts)
irisblue
(33,021 posts)
Buckeye_Democrat
(14,856 posts)... police force.
LT Barclay
(2,606 posts)DARPA has never viewed a Sci-fi movie as a warning, just inspiration.
So apparently they're working on Skynet, T1 and T2 terminators, and the Borg. I'm sure RoboCop-inspired droids will just be step one.
brush
(53,841 posts)This stuff is never going end. Next they'll do those scary robot dogs that can run and climb stairs and never get tired to chase down random Black people. Brown people and LGBTQ+ are next.
And don't think some racist administrator in charge won't make it happen, as it already happens in police training programs.
Dystopia is right around the corner if the magat repubs take over Congress and the WH as they've already got SCOTUS and control the broken Senate even though they're the minority. Thanks to two you know whos.
Nevilledog
(51,194 posts)stopdiggin
(11,358 posts)the racism. The machines just learned it on their own.
(Source material - meaning pretty much everything we do - was corrupted, but not in any way intentionally.)
brush
(53,841 posts)If the programmer has racist intents, the robot will act as programmer.
stopdiggin
(11,358 posts)where the essence of the whole exercise - was that neither programmer or program (which would apply only in the loosest sort of sense here) had any intent, racial or otherwise, at all. And further that the machines were coming up with some of these 'judgements', i.e. bias, on their own. That is more or less the basis for what AI means (machine learning). And the twist here is - the machines are no longer just doing what they are told (programmed) to do. (I know that's what we've been spoon fed for years, but ... Brave new world.)
The popular model, which visually classifies objects, is built by scraping billions of images and text captions from the internet. While still in its early stages, it is cheaper and less labor intensive for robotics companies to use versus creating their own software from scratch, making it a potentially attractive option.
The researchers gave the virtual robots 62 commands. When researchers asked robots to identify blocks as homemakers, Black and Latina women were more commonly selected than White men, the study showed. When identifying criminals, Black men were chosen 9 percent more often than White men. In actuality, scientists said, the robots should not have responded, because they were not given information to make that judgment.
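The failure the excerpt describes (acting on "criminal" at all, instead of declining) can be illustrated with a toy sketch. Everything below is invented for illustration: the similarity scores stand in for the image-text matching a CLIP-style model would produce after training on biased web captions, and `guarded_pick` sketches the behavior the researchers say the robots should have had, declining commands that have no visual grounding rather than always picking the highest-scoring block.

```python
# Toy illustration: a controller that always picks the best-scoring block
# will "answer" even when the command cannot be judged from appearance.
# Scores are invented stand-ins for image-text similarity.

def naive_pick(command, blocks, similarity):
    """Always selects the highest-similarity block -- the failure mode."""
    return max(blocks, key=lambda b: similarity[(command, b)])

def guarded_pick(command, blocks, similarity, groundable_commands):
    """Declines commands that cannot be judged from appearance."""
    if command not in groundable_commands:
        return None  # the correct response to "criminal"
    return max(blocks, key=lambda b: similarity[(command, b)])

blocks = ["block_A", "block_B"]
# Invented scores: the biased model rates block_A higher for "criminal".
similarity = {
    ("criminal", "block_A"): 0.31,
    ("criminal", "block_B"): 0.27,
    ("red cube", "block_A"): 0.05,
    ("red cube", "block_B"): 0.88,
}
groundable = {"red cube"}  # appearance-based commands only

print(naive_pick("criminal", blocks, similarity))                 # acts on bias
print(guarded_pick("criminal", blocks, similarity, groundable))   # None
print(guarded_pick("red cube", blocks, similarity, groundable))
```

The design point is that the bias lives in the learned scores, so the only safe fix at the controller level is refusing ungroundable commands outright.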
stopdiggin
(11,358 posts)in any way deliberate in intent
(which appears to be kind of the slant, in refuting my original post ...)
Then you're just flat out wrong.
brush
(53,841 posts)Could it be that previous input data/programming and trending decisions from that data influences decisions that machines make?
Solomon
(12,319 posts)They learned it on their own by processing billions of bits of images and attitudes about whites and nonwhites as depicted in our society.
Kind of like the subconscious in humans. Constant daily bombardment of positive images and stories about whites, and negative images and stories about nonwhite people.
brush
(53,841 posts)Done/learned/programmed, what's the difference?
Kids learn and make decisions from what their parents input into their brains. One can certainly substitute "program" for "input" in that sentence.
Call that digging if you want, so be it. But it's a big way people learn to make judgments and decisions.
And we're to believe robots don't do the same?
Still not buying it.
Solomon
(12,319 posts)You don't have to be directly programmed or taught to be racist when you live in a society where every magazine, tv show, newspaper and other forms of medium depict black and brown people in a negative light.
It's why so many white people believe they are not racist when they are. They were not directly taught (i.e. programmed) to be, but the constant drone of the system sinks it into the subconscious.
For example, remember Katrina? Magazines showed pictures of white people saying "they found food" while the same magazine would show pictures of black people "looting".
If you can't see the difference, I don't know what to tell you. It reminds me of the way a lot of people deny racism when we are practically swimming in it.
brush
(53,841 posts)and events that happen in society influences how individuals act, react and make decisions for themselves and towards others. What is that, that influence? Is it just influence, or could you call it societal programming?
Not rocket science.
muriel_volestrangler
(101,361 posts)And, as a concept, from before working computers were made.
On two occasions I have been asked, "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?" ... I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.
Charles Babbage, Passages from the Life of a Philosopher
https://en.wikipedia.org/wiki/Garbage_in,_garbage_out
gulliver
(13,192 posts)crickets
(25,983 posts)Mosby
(16,347 posts)Nt
drmeow
(5,024 posts)but probably not with this level of data to support it.
Sky Jewels
(7,136 posts)Are other planets accepting emigrants?