When fans arrived at the Boston Calling music festival last year, their tickets were scanned, their bags were checked and, unbeknownst to them, their faces were scanned and entered into a police database. Without knowing it, attendees had volunteered their identities to the authorities as part of a surveillance test project that let Boston police track their movements throughout the festival and compare them with their social media activity in real time.
Like police in other cities, Boston authorities are examining how best to use technology to keep the public safe. But facial recognition has created more questions than it has answered, in no small part because of its privacy implications, questions about data storage and uncertainty around investigative techniques.
Music fans descended on Boston Calling on May 25, 2013, barely more than a month after two homemade explosives detonated at the Boston Marathon, killing three people and injuring more than 260 others. The manhunt for the bombers lasted days and stretched across much of the city, leaving Bostonians, and the police department especially, anxious about another, potentially even more devastating, attack.
Documents obtained by Dig Boston, an alternative weekly newspaper, show the police used Boston Calling as a trial run for “Face Capture.” The plan was to use technology purchased from IBM’s Smarter Cities initiative to identify “every person” attending the concert, with the documents defining a “party of interest” as “anyone who walks through the door.”
The software identifies people based on physical traits like skin tone, amount of hair, eyeglasses and facial hair, as well as torso dimensions. Those variables, slides seen by Vice News indicate, would then be transmitted to a city hub where police and city officials monitor social media. The supplemental systems would alert authorities to instances such as “when a person loiters near a doorway as they would if trying to gain entrance,” “attempts to climb perimeter barricade,” or an object left near a barricade, as a bomb might be.
Polls have consistently found that Americans overwhelmingly favor regulated video surveillance in public places, especially when police are working to curb the threat of terrorism. A Rasmussen Reports survey of 1,000 U.S. adults conducted in the weeks after the Boston Marathon bombings found 70 percent of respondents in favor of public security cameras, while 18 percent were opposed and 11 percent were undecided.
“I know some people are paranoid about the government intruding on their privacy,” retired teacher Judith Richards told the New York Times after participating in the poll. “But with all the horrible things that have been happening, I think you have to trust this as a way to protect our well-being.”
Yet the Boston Marathon investigation also underscored how unreliable facial recognition systems still are. Investigators had multiple images of the Tsarnaev brothers at the Boston Marathon carrying black bags but were unable to connect that information with their immigration records, driver’s license photos and a previous FBI investigation into the elder brother. That police took days to make the connection is proof, experts say, that no such ominous government blacklist actually exists, at least not an accurate one.
“The work was painstaking and mind-numbing: One agent watched the same segment of video 400 times,” the Washington Post reported, revealing that more traditional methods led to the identification of the Tsarnaev brothers. “The goal was to construct a timeline of images, following possible suspects as they moved along the sidewalks, building a narrative out of a random jumble of pictures from thousands of different phones and cameras.”
While Boston police didn’t break any laws with the Face Capture effort, various reports have objected to the number of music fans used as guinea pigs for what could someday become an expansive government surveillance project. Facebook, for example, previously announced that its DeepFace facial recognition system can determine with 97 percent accuracy whether two images show the same person. The company, itself accustomed to criticism that it treats users as guinea pigs, is able to make such accurate identifications because of the network of images it draws from, something that could take police agencies a decade or more to build.
“It’s going to get better and better. As it does, it’s not just the FBI, CIA and government agencies, but also every shopping mall you go into, potentially sports arenas,” Kade Crockford, director of the American Civil Liberties Union of Massachusetts’ Technology for Liberty Project, told Vice journalist Luke O’Neil.
“It’s going to be a lot like the dystopian scenes in the mall in the film ‘Minority Report.’ We really need to get a handle on what exactly government agencies are doing. Not just thinking about it, but actually acting on public concerns about how this technology is going to be used against us, and actually passing laws that restrict some of the ways.”