Post by: Kim Stephens
One valid concern that almost always comes up when discussing social media with an audience of emergency managers is: “How do we keep up with all of this information in a crisis?” The sheer volume of social media content can be overwhelming, especially after a disaster. For example, after the flooding crisis in Australia, just one of the Queensland Police Service’s Facebook posts received 11,000 comments, and they regularly had over 1,000 comments on each of their posts during the height of the crisis. As another example, immediately after the earthquake in Japan, a Mashable article stated that the number of tweets from Tokyo topped 1,200 per minute.
The fear is obvious: how can emergency managers, already stretched thin during a crisis, analyze that volume of information in order to (1) ensure that an important piece is not missed, (2) enhance situational awareness, and (3) respond in a timely manner to citizens who ask direct questions through these social media platforms? The analogy of drinking from a fire hose is apt.
During the Social Media Chat (#SMEMchat) this past Friday (archived here by the wonderful @EmrgncyTraffic) we discussed a related problem: too many social media platforms. FEMA Administrator Craig Fugate tweeted:
This concept of using computer processing or geospatial search tools to map feedback is currently being developed by many different people and organizations. These applications usually involve a way to sort and analyze social media data and then place it on a visualization tool (e.g., a map). Although Ushahidi and Project EPIC were two of the first, others are entering the marketplace, including the GeoVISTA Center at Penn State University, which has recently built a tool called SensePlace2. Currently, this version only supports the processing of tweets. Here’s their description: “a geovisual analytics application that forages place-time-attribute information from the Twitterverse and supports crisis management through visually-enabled sensemaking with the information derived.”
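To make the sort/analyze/visualize idea concrete, here is a minimal sketch of that kind of pipeline: filter a batch of tweets down to geotagged, crisis-related messages and bucket them into map cells so the densest areas surface first. All field names, keywords, and sample tweets are hypothetical illustrations, not taken from SensePlace2 or any real tool.

```python
# Hypothetical sketch of a sort -> analyze -> map pipeline for crisis tweets.
# Data shapes and keywords are illustrative only.

FLOOD_KEYWORDS = {"flood", "evacuate", "rescue"}

def triage_tweets(tweets, keywords=FLOOD_KEYWORDS):
    """Keep only geotagged tweets that mention a crisis keyword,
    bucketed into ~0.1-degree lat/lon cells for plotting on a map."""
    cells = {}
    for t in tweets:
        text = t["text"].lower()
        coords = t.get("coords")  # (lat, lon) tuple, or None if not geotagged
        if coords is None or not any(k in text for k in keywords):
            continue
        # Round coordinates so nearby reports cluster into one map cell.
        cell = (round(coords[0], 1), round(coords[1], 1))
        cells.setdefault(cell, []).append(t["text"])
    # Densest cells first: these are the hotspots a map view would highlight.
    return sorted(cells.items(), key=lambda kv: -len(kv[1]))

sample = [
    {"text": "Flood water rising fast on our street", "coords": (-27.47, 153.02)},
    {"text": "Need rescue, two people on a roof", "coords": (-27.48, 153.03)},
    {"text": "Lovely weather today", "coords": (-27.47, 153.02)},  # no keyword: dropped
    {"text": "Evacuate now, bridge is out", "coords": None},       # no geotag: dropped
]
for cell, texts in triage_tweets(sample):
    print(cell, len(texts))  # one cell near Brisbane with 2 matching reports
```

Real tools obviously go much further (streaming ingestion, natural-language analysis, interactive maps), but the core reduction, from thousands of raw posts to a ranked handful of geographic hotspots, is what makes the fire hose drinkable.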
Outsourcing to Trusted Agents
Other SMEM chats have focused on how to get help with social media data processing, and one suggestion is to outsource this task to a predetermined group of “trusted agents,” as stated by Chris Hall, aka TheFireTracker2. This could be a group of people who are well known to you personally, such as the local CERT team, or it could be people you have come to trust but have never met (e.g., I have never met TheFireTracker IRL, in real life).
The Virtual Operations Support Team (VOST) is an idea currently in the development stage and designed around this concept of outsourcing. The brainchild of Jeff Philips, a SMEM thought-leader, the VOST is designed to help with the tasks of monitoring, archiving, and cataloging social media content during a crisis. One great thing about monitoring social media is that you do not have to be physically co-located with emergency response personnel; monitoring can take place from anywhere, including a home personal computer. The team Jeff has assembled, via social media of course, has been testing this concept during conferences with large amounts of Twitter traffic (such as the 140 Conference). The team is currently collaborating on a white paper describing the details of the concept. I’m looking forward to its publication.
These are just a few examples of solutions to this data processing quandary. I’ll be writing about some other new tools in the coming weeks.