Post by: Kim Stephens
Crowdsourcing for disaster response and recovery has been a hot topic since the 2010 earthquake in Haiti. In fact, Google the term “Haiti earthquake crowdsourcing” and you’ll get 132,000 results. But mention the term to local or state emergency managers and you are likely to elicit instant anxiety: How can the crowd be utilized without overwhelming “official” responders? Dr. Patrick Meier recently described this fear on his blog iRevolution:
While the majority of emergency management centers do not create the demand for crowdsourced crisis information, members of the public are increasingly demanding that said responders monitor social media for “emergency posts”. But most responders fear that opening up social media as a crisis communication channel with the public will result in an unmanageable flood of requests…
At the Federal level, however, crowdsourcing is not only familiar–it has recently been embraced very publicly by FEMA. For instance, they used the power of the crowd during the aftermath of Hurricane Sandy to help review images of damage.
The Civil Air Patrol (CAP) took more than 35,000 GPS-tagged images in fly-overs of damage-affected areas. This was performed as part of their mandate to provide aerial photographs to disaster assessment and response agencies, primarily FEMA, which used the aggregated geolocated data for situational awareness. The scale of the destruction meant that there was an unusually large number of photographs for a single disaster. As a result, it was the first time that CAP and FEMA used distributed third-party information processing for the damage assessment. (source: http://idibon.com/crowdsourced-hurricane-sandy-response/)
Just this summer, FEMA added a new feature to its mobile application that is also a form of crowdsourcing: the app lets people submit images of damage, which are then aggregated and placed on a publicly available map. These efforts have received quite a lot of media attention–see: FEMA App Adds Crowdsourcing for Disaster Relief.
When talking about crowdsourcing, however, I find it important to break how the crowd is utilized into categories: the FEMA examples above describe two very different uses of the crowd, based on two different objectives. A great new report for the IBM Center for The Business of Government by Dr. Daren C. Brabham, of the Annenberg School for Communication and Journalism at the University of Southern California, identifies four categories of crowdsourcing and argues that the type chosen should depend on the desired outcome. The report isn’t specific to emergency management, but it does mention some familiar programs, such as the USGS “Did You Feel It?” program.
Below is their summary. You can download the report here: Using Crowdsourcing In Government.
The growing interest in “engaging the crowd” to identify or develop innovative solutions to public problems has been inspired by similar efforts in the commercial world. There, crowdsourcing has been successfully used to design innovative consumer products or solve complex scientific problems, ranging from custom-designed T-shirts to mapping genetic DNA strands.
The Obama administration, as well as many state and local governments, have been adapting these crowdsourcing techniques with some success. This report provides a strategic view of crowdsourcing and identifies four specific types:
- Type 1: Knowledge Discovery and Management. Collecting knowledge reported by an on-line community, such as the reporting of earth tremors or potholes to a central source.
- Type 2: Distributed Human Intelligence Tasking. Distributing “micro-tasks” that require human intelligence to solve, such as transcribing handwritten historical documents into electronic files.
- Type 3: Broadcast Search. Broadcasting a problem-solving challenge widely on the internet and providing an award for the solution, such as NASA’s prize for an algorithm to predict solar flares.
- Type 4: Peer-Vetted Creative Production. Creating peer-vetted solutions, where an on-line community both proposes possible solutions and is empowered to collectively choose among the solutions.
By understanding the different types, which require different approaches, public managers will have a better chance of success. Dr. Brabham focuses on the strategic design process rather than on the specific technical tools that can be used for crowdsourcing. He sets forth ten emerging best practices for implementing a crowdsourcing initiative.
What do you think? Is your organization interested in using crowdsourcing anytime soon? Which category would best fit your desired objectives?
Thanks for the mention of Idibon’s work hosting the crowdsourced aerial imagery analysis for FEMA and CAP following Sandy. We have completed pro bono work with several organizations around disaster-related information processing, and see great progress and innovation. We keep these efforts private to avoid detrimental hype cycles, but are transparent with after-action reports like the ones for Sandy.
Speaking of hype, I ran most of the crowdsourcing efforts in Haiti, too. I would encourage your readers to read our report:
The crowdsourcing response to the earthquake was a majority Haitian initiative and social media did not play an important role. The web results you mention are mostly because some of our (non-Haitian) partners published information about at-risk individuals online after being repeatedly asked not to.
Please also check out a recent post on my personal site about how private disaster preparedness and response initiatives are being misrepresented as open, and the dangers this carries:
Thanks again for the mention of Idibon’s work, and please do encourage your readers/followers to avoid open social media for information about at-risk individuals.
FEMA also uses crowdsourcing for input into its plans and guidance, e.g., the new Quadrennial report.
Yes, Claire–that is another good example!
Thanks Robert for your comment. I look forward to reading your posts. I find your perspective quite interesting.