A new initiative from Facebook will provide aid organizations with location data for users in affected areas, such as where people are marking themselves safe and from where they are fleeing. It shows the immense potential of this kind of fine-grained tracking, but inescapably resurfaces questions of just what else the company could do with the data.
In a blog post and video, Facebook’s public policy research manager Molly Jackman describes the new “disaster maps” that will be provided to UNICEF, the International Red Cross and Red Crescent, and the World Food Program to start.
Essentially there will be three types of information provided — sourced, presumably, from GPS and other location indicators provided whenever a Facebook user does just about anything.
Location density maps will provide rough but recent estimations of where people are distributed, which can be compared to earlier such maps or other measures.
Movement maps show how users changed locations, from neighborhood to neighborhood or city to city, and when — this could help direct urgent resources to ad hoc gathering spots or bottlenecks.
Safety Check maps show where people have marked themselves safe; if they’re clustering in one area, that may indicate the limits of a flood’s effects, or a region undamaged by an earthquake.
One can easily imagine how useful this type of up-to-date information could be to anyone trying to figure out where to send fresh water, set up emergency shelters and so on. Of course, the data is aggregated and de-identified, and the partner organizations must “respect our privacy standards,” Jackman wrote.
But on the other hand, when Facebook flexes its data muscles for good like this, suspicious minds begin to think along other lines.
What other kinds of population movements does Facebook track, perhaps to better inform advertisers? How can users know if their location is being roped into this kind of experimental data silo? Does this include information gleaned from Facebook plug-ins and cookies? Who has access to the data, and at what level of granularity and de-identification?
The company is under no formal obligation to explain itself, and I don’t mean to suggest any specific ill purpose. But it’s always troubling when, given a tool that can very easily be used for both humanitarian purposes and self-serving ones, we only hear about the former.
As usual, unless Facebook and other major data hoarders like Google specifically say they aren’t using your data for something, it’s best to assume that they are.