As Hurricane Harvey finally reaches its denouement in Texas and the massive cleanup effort begins, news outlets are reporting on several ingenious social media search-and-rescue tools. I’ll highlight two examples here, both of which show the power of turning data into information.
First, U-Flood is a crowdsourced mapping tool that lets users individually report the flood status of their nearby streets. This helps potential rescuers of all kinds get on-the-ground data before heading out to find stranded citizens. Mapbox and OpenStreetMap India both lent source code to the project, for free. When we first launch the website, our first choice as potential users is a classification – our metropolitan area – which pulls up a map relevant to us. By inputting our local data in an organized fashion, using GIS and mapping technologies, and on a crowdsourced level, this data very quickly becomes actionable information. At what point does it become knowledge? Perhaps when the driver of a rescue operation realizes, based on the information gathered from the various data points, that the southeastern quadrant of a city must be avoided.
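To make the data-to-information leap concrete, here is a minimal sketch – entirely hypothetical field names and data, not U-Flood’s actual code – of how individual crowdsourced street reports can be aggregated into something a rescuer can act on:

```python
from collections import Counter

# Each crowdsourced report is a (street, area, status) tuple from one user.
# These are made-up example reports, for illustration only.
reports = [
    ("Main St", "southeast", "flooded"),
    ("Oak Ave", "southeast", "flooded"),
    ("Elm St", "southeast", "flooded"),
    ("Pine Rd", "northwest", "clear"),
]

def flooded_areas(reports, threshold=2):
    """Return areas with at least `threshold` flooded-street reports."""
    counts = Counter(area for _, area, status in reports if status == "flooded")
    return sorted(area for area, n in counts.items() if n >= threshold)

print(flooded_areas(reports))  # ['southeast']
```

The individual tuples are raw data; the aggregated answer – “avoid the southeast” – is the actionable information a driver would use.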
Next, Houston Harvey Rescue is another grassroots tool created by a volunteer. HHR’s mission is two-fold: enable stranded or otherwise in-distress citizens to fire a virtual flare for help, and connect empowered rescuers with them. In this case, it’s interesting to see the evolution of knowledge organization and management in real time. The first iteration was a Twitter account called “@HarveyRescue,” in which a local volunteer would “log requests for rescues seen on social media.” After an immense influx of data on Sunday 8/27, the volunteer(s) realized a better methodology was needed for organizing the firehose: a Google spreadsheet. By Sunday night, however, even the Google spreadsheet had evolved into a Google Form, whose required fields enabled greater searchability and information retrieval (IR). The final (and current) stage, the fourth phase of the project, is HoustonHarveyRescue.com. This iteration is by far the most organized, and the most powerful; it begins by allowing users to self-elect whether they are a “Rescuer” or “Need to be Rescued,” and offers conditional fields and page paths accordingly. Here we can see that the main classification is whether someone is a rescuer or in need of rescue; the associated record and metadata are then requested accordingly (e.g., HHR doesn’t need to know if you have a boat or truck if you need to be rescued, but this is imperative for a potential rescuer to disclose).
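The conditional-fields idea can be sketched in a few lines. This is a hypothetical schema invented for illustration – the field names and roles are assumptions, not HHR’s actual form design – but it shows how the top-level classification (rescuer vs. in need of rescue) drives which metadata is required:

```python
# Hypothetical required-field schema keyed by the top-level classification.
REQUIRED_FIELDS = {
    "rescuer": ["name", "phone", "vehicle"],        # boat/truck matters here
    "needs_rescue": ["name", "phone", "location"],  # but not here
}

def missing_fields(role, record):
    """Return the required fields for `role` that are absent from `record`."""
    return [f for f in REQUIRED_FIELDS[role] if f not in record]

record = {"name": "J. Doe", "phone": "555-0100"}
print(missing_fields("rescuer", record))       # ['vehicle']
print(missing_fields("needs_rescue", record))  # ['location']
```

The same submitted record is complete or incomplete depending on its classification – which is exactly why the classification comes first in the form.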
Even with a topic as seemingly un-librarian-like as coping with a massive natural disaster, we can see the principles of knowledge organization at work and the importance of data classification. By applying these principles, a tool that allows users to share funny cat memes has also become a tool by which data becomes information, and lives are saved. At the time of this blog post, 7,852 people have been rescued thanks to HHR’s efforts. Hopefully that number will continue to rise.
Submitted by Lindsay Menachemi, LIS653-01