
xchrom

(108,903 posts)
Mon Apr 23, 2012, 07:28 AM

Homeland Security's 'Pre-Crime' Screening Will Never Work

http://www.theatlantic.com/technology/archive/2012/04/homeland-securitys-pre-crime-screening-will-never-work/255971/

Pre-crime prevention is a terrible idea.

Here is a quiz for you. Is predicting crime before it happens: (a) something out of Philip K. Dick's Minority Report; (b) the subject of a Department of Homeland Security research project that has recently entered testing; (c) a terrible and dangerous idea which will inevitably be counter-productive and which will levy a high price in terms of civil liberties while providing little to no marginal security; or (d) all of the above?

If you picked (d) you are a winner!

The U.S. Department of Homeland Security is working on a project called FAST, the Future Attribute Screening Technology, which is some crazy straight-out-of-sci-fi pre-crime detection and prevention software that may come to an airport security screening checkpoint near you someday soon. Yet again the threat of terrorism is being used to justify the introduction of super-creepy invasions of privacy, leading us one step closer to a turn-key totalitarian state. This may sound alarmist, but in cases like this a little alarm is warranted. FAST will remotely monitor physiological and behavioral cues, like elevated heart rate, eye movement, body temperature, facial patterns, and body language, and analyze these cues algorithmically for statistical aberrance in an attempt to identify people with nefarious intentions. There are several major flaws with a program like this, any one of which should be enough to condemn attempts of this kind to the dustbin. Let's look at them in turn.
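To make the "statistical aberrance" idea concrete, here is a minimal, purely illustrative Python sketch. The cue names, baseline values, and z-score approach are assumptions for the sake of the example; the actual FAST methodology is largely not public.

```python
import numpy as np

# Hypothetical baseline statistics for a few monitored cues (illustrative values only):
# heart rate (bpm), body temperature (C), gaze-shift rate (shifts per second).
BASELINE_MEAN = np.array([72.0, 36.9, 0.15])
BASELINE_STD  = np.array([10.0,  0.4, 0.05])

def aberrance_score(observation):
    """Return a crude anomaly score: mean absolute z-score across the cues."""
    z = (np.asarray(observation, dtype=float) - BASELINE_MEAN) / BASELINE_STD
    return float(np.mean(np.abs(z)))

# A traveler with an elevated heart rate and temperature gets a higher score,
# whether the cause is malicious intent, a tight connection, or a fever.
print(aberrance_score([72, 36.9, 0.15]))   # ~0.0, "normal"
print(aberrance_score([105, 37.8, 0.30]))  # noticeably higher
```

The point of the sketch is that the score only measures deviation from a statistical baseline, not intent, which is exactly where the problems below come in.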

First, predictive software of this kind is undermined by a simple statistical problem known as the false-positive paradox. Any system designed to spot terrorists before they commit an act of terrorism is, necessarily, looking for a needle in a haystack. As the adage would suggest, it turns out that this is an incredibly difficult thing to do. Here is why: let's assume for a moment that 1 in 1,000,000 people is a terrorist about to commit a crime. Terrorists are actually probably much, much rarer, or we would have a whole lot more acts of terrorism, given the daily throughput of the global transportation system. Now let's imagine the FAST algorithm correctly classifies 99.99 percent of observations -- an incredibly high rate of accuracy for any big data-based predictive model. Even with this unbelievable level of accuracy, the system would still falsely accuse 99 people of being terrorists for every one terrorist it finds. And given that none of these people would have actually committed a terrorist act yet, distinguishing the innocent false positives from the guilty might be a non-trivial and invasive task.
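The arithmetic behind the false-positive paradox is easy to check. A minimal sketch, using the article's assumed numbers and treating the 99.99 percent accuracy as both the true-positive and true-negative rate (an assumption made here to keep the example simple):

```python
population = 1_000_000        # travelers screened
base_rate  = 1 / 1_000_000    # assumed prevalence of would-be terrorists
accuracy   = 0.9999           # assumed true-positive and true-negative rate

terrorists = population * base_rate       # 1 actual terrorist
innocents  = population - terrorists      # 999,999 innocent travelers

true_positives  = terrorists * accuracy        # ~1 correctly flagged
false_positives = innocents * (1 - accuracy)   # ~100 innocents flagged

print(false_positives / true_positives)  # roughly 100 false accusations per real terrorist
```

The exact ratio depends on rounding, but the result matches the article's point: even a near-perfect classifier drowns the one real hit in a crowd of falsely accused innocents.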

Of course FAST has nowhere near a 99.99 percent accuracy rate. I imagine much of the work being done here is classified, but a writeup in Nature reported that the first round of field tests had a 70 percent accuracy rate. From the available material it is difficult to determine exactly what this number means. There are a couple of ways to interpret this, since both the write-up and the DHS documentation (all pdfs) are unclear. This might mean that the current iteration of FAST correctly classifies 70 percent of people it observes -- which would produce false positives at an abysmal rate, given the rarity of terrorists in the population. The other way of interpreting this reported result is that FAST will call a terrorist a terrorist 70 percent of the time. This second option tells us nothing about the rate of false positives, but it would likely be quite high. In either case, it is likely that the false-positive paradox would be in full force for FAST, ensuring that any real terrorists identified are lost in a sea of falsely accused innocents.
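The two readings of that 70 percent figure lead to very different outcomes. A rough sketch of both, again assuming 1 would-be terrorist per 1,000,000 travelers (the actual FAST test protocol is not public, so these are illustrative numbers only):

```python
population = 1_000_000
terrorists = 1
innocents  = population - terrorists

# Reading 1: FAST classifies 70% of *all* observations correctly,
# so roughly 30% of innocent travelers are misclassified as threats.
false_positives_reading1 = innocents * 0.30    # ~300,000 flagged innocents

# Reading 2: FAST catches 70% of actual terrorists (sensitivity = 0.7).
# This says nothing by itself about how many innocents get flagged.
true_positives_reading2 = terrorists * 0.70    # ~0.7 of the single terrorist

print(false_positives_reading1)
print(true_positives_reading2)
```

Under the first reading the false-positive count is catastrophic; under the second it is simply unknown, which is hardly more reassuring.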
6 replies
Homeland Security's 'Pre-Crime' Screening Will Never Work (Original Post) xchrom Apr 2012 OP
There are people who failed lie detector tests while telling the truth. hobbit709 Apr 2012 #1
Was Zimmerman doing some precrime screening of his own? broiles Apr 2012 #2
I think the people running private and for-profit prisons would disagree. TalkingDog Apr 2012 #3
Heisenberg uncertainty principle zipplewrath Apr 2012 #4
How is this NOT profiling? Bruce Wayne Apr 2012 #5
"Everybody runs." Ellipsis Apr 2012 #6

hobbit709

(41,694 posts)
1. There are people who failed lie detector tests while telling the truth.
Mon Apr 23, 2012, 07:31 AM

I doubt this idiotic idea will even be that accurate.

TalkingDog

(9,001 posts)
3. I think the people running private and for-profit prisons would disagree.
Mon Apr 23, 2012, 09:05 AM

And they want to know why the Atlantic is being such a "Debbie Downer" about the whole thing.

zipplewrath

(16,646 posts)
4. Heisenberg uncertainty principle
Mon Apr 23, 2012, 09:19 AM

It's not that the article is wrong, but it operates under some unstated assumptions. The false-positive problem is only a "problem" if one responds to the filter in the wrong way. So you get a false positive, now what do you do? Secondary screening? Background check? Put a Marshal in the seat next to them? The false positives only become a problem if the response is to start operating as if these people are now "guilty" of something.

The real problem here is more along the lines of the Heisenberg uncertainty principle. Trying to detect something alters it. Suppose you have a real, live terrorist on your hands. Singling him out, or handling him differently, may very likely cause him to abandon his plans. So you get into the problem of identifying potential terrorists, but not being able to do much about it except wait. He just has to keep trying until he isn't detected. Without a "perfect" system, he's not going to have to try too many times before he succeeds at not being detected. He might even run a few "dry runs" to establish his own patterns so he can sense if he is being singled out.
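The "keep trying until he isn't detected" point can be put in rough numbers. A small sketch, assuming each screening is an independent trial with some per-attempt detection probability; both the independence assumption and the probabilities here are illustrative, not anything from the DHS material:

```python
def chance_of_slipping_through(detection_prob, attempts):
    """Probability that at least one of `attempts` screenings misses the attacker,
    assuming each screening independently detects him with `detection_prob`."""
    return 1 - detection_prob ** attempts

# Even with a generous 70% per-attempt detection rate, a patient attacker
# who probes the checkpoint a handful of times is likely to find a gap.
for n in (1, 3, 5):
    print(n, round(chance_of_slipping_through(0.70, n), 3))
# 1 0.3
# 3 0.657
# 5 0.832
```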

The real problem here is that there are precious few terrorist acts AT ALL. You are trying to detect something that functionally "never" happens. No matter how "perfect" a system you create, it will very likely never get the "chance" to work. And the data upon which it was created will most likely become obsolete before it ever gets the chance to "work".
