Category Archives: Crowdsourcing

Crowdsourcing, Digital Volunteers, and Policy: New Workshop Summary from the Wilson Center

Post by: Kim Stephens

Woodrow Wilson International Center for Scholars (Photo credit: Wikipedia)

A year ago this month the Commons Lab, part of the Wilson Center’s Science & Technology Innovation Program, hosted a workshop with the goal of “bringing together emergency responders, crisis mappers, researchers, and software programmers to discuss issues surrounding the adoption of… new technologies.” The discussions included an in-depth review of crowdsourcing, specifically the use of, as well as the reluctance to use, digital technology teams to aid in both message dissemination and data aggregation. The 148-page report from that meeting, released yesterday, is titled “Use of Mass Collaboration in Disaster Management,” with a focus on “opportunities and challenges posed by social media and other collaborative technologies.”

The Executive Summary states:

Factors obstructing the adoption of crowdsourcing, social media, and digital volunteerism approaches often include uncertainty about accuracy, fear of liability, inability to translate research into operational decision-making, and policy limitations on gathering and managing data. Prior to the workshop, many in the formal response community assumed that such obstructions are insurmountable and, therefore, that the approaches could not be adopted by the response community. However, it became clear during the workshop that these approaches are already being integrated into disaster response strategies at various scales. From federal agencies to local emergency managers, officials have begun exploring the potential of the technologies available. Stories of success and failure were common, but out of both came policy, research, and technological implications. Panelists shared strategies to overcome barriers where it is appropriate, but resisted change in areas where policy barriers serve a meaningful purpose in the new technological environment.

…Workshop participants identified the following activities as some of the more urgent research priorities:

  • Creating durable workflows to connect the information needs of on-the-ground responders, local and federal government decision-makers, and researchers, allowing each group to benefit from collaboration;
  • Developing methods and processes to quickly validate and verify crowdsourced data;
  • Establishing best practices for integrating crowdsourced and citizen-generated data with authoritative datasets, while also streamlining this integration;
  • Deciding on the criteria for “good” policies and determining which policies need to be adapted or established, in addition to developing ways for agencies to anticipate rapid technological change;
  • Determining where government agencies can effectively leverage social networking, crowdsourcing, and other innovations to augment existing information or intelligence and improve decision-making (and determining where it is not appropriate).

Curious about the best use of crowdsourcing? Read this report.

Post by: Kim Stephens

Crowdsourcing for disaster response and recovery has been a hot topic since the 2010 earthquake in Haiti. In fact, Google the term “Haiti earthquake crowdsourcing” and you’ll get 132,000 results. But mention the word to local or state emergency managers and you are likely to elicit instant anxiety: How can the crowd be utilized without overwhelming “official” responders? Dr. Patrick Meier recently described this fear on his blog iRevolution:

While the majority of emergency management centers do not create the demand for crowdsourced crisis information, members of the public are increasingly demanding that said responders monitor social media for “emergency posts”. But most responders fear that opening up social media as a crisis communication channel with the public will result in an unmanageable flood of requests…

At the Federal level, however, crowdsourcing is not only familiar–it has recently been embraced very publicly by FEMA. For instance, the agency used the power of the crowd in the aftermath of Hurricane Sandy to help review images of damage.

The Civil Air Patrol (CAP) were taking over 35,000 GPS-tagged images in fly-overs of damage-affected areas. This was performed as part of their mandate to provide aerial photographs for disaster assessment and response agencies, primarily to FEMA, who used the aggregate geolocated data for situational awareness. The scale of the destruction meant that there was a relatively large amount of photographs for a single disaster. As a result, it was the first time that CAP and FEMA used distributed third-party information processing for the damage assessment. (source: http://idibon.com/crowdsourced-hurricane-sandy-response/)

Just this summer, FEMA added a new feature to their mobile application that is also considered crowdsourcing. The app includes the ability for people to submit images of damage, which are then aggregated and placed on a publicly available map. Their efforts have received quite a lot of media attention–see: FEMA App Adds Crowdsourcing for Disaster Relief.

However, when talking about crowdsourcing, I often find it important to break how the crowd is utilized into categories: the FEMA examples above describe two very different uses of the crowd based on two different objectives. A great new report from the IBM Center for The Business of Government, by Dr. Daren C. Brabham of the Annenberg School for Communication and Journalism at the University of Southern California, finds that there are actually four categories of crowdsourcing, and that the type chosen should depend upon the desired outcome. The report isn’t specific to emergency management, but it does mention some familiar programs, such as the USGS “Did You Feel It?” program.

Below is their summary. You can download the report here: Using Crowdsourcing In Government.

The growing interest in “engaging the crowd” to identify or develop innovative solutions to public problems has been inspired by similar efforts in the commercial world.  There, crowdsourcing has been successfully used to design innovative consumer products or solve complex scientific problems, ranging from custom-designed T-shirts to mapping genetic DNA strands.

The Obama administration, as well as many state and local governments, have been adapting these crowdsourcing techniques with some success.  This report provides a strategic view of crowdsourcing and identifies four specific types:

  • Type 1:  Knowledge Discovery and Management. Collecting knowledge reported by an on-line community, such as the reporting of earth tremors or potholes to a central source.
  • Type 2:  Distributed Human Intelligence Tasking. Distributing “micro-tasks” that require human intelligence to solve, such as transcribing handwritten historical documents into electronic files.
  • Type 3:  Broadcast Search. Broadcasting a problem-solving challenge widely on the internet and providing an award for the solution, such as NASA’s prize for an algorithm to predict solar flares.
  • Type 4:  Peer-Vetted Creative Production. Creating peer-vetted solutions, where an on-line community both proposes possible solutions and is empowered to collectively choose among the solutions.

By understanding the different types, which require different approaches, public managers will have a better chance of success.  Dr. Brabham focuses on the strategic design process rather than on the specific technical tools that can be used for crowdsourcing.  He sets forth ten emerging best practices for implementing a crowdsourcing initiative.

What do you think? Is your organization interested in using crowdsourcing anytime soon? Which category would best fit your desired objectives?

One County’s Social Media Stats: Hurricane Sandy

Post by: Kim Stephens

Fairfax County, VA’s Office of Public Affairs published their Social Media “Metrics Report” which provides a quantitative assessment of how well their social media presence was received during Hurricane Sandy (October 26-31 specifically). One of the more interesting components is the comparison to their social media numbers during Hurricane Irene, a big event for the Northern Virginia and Washington DC area.

Three items from this report stood out to me:

1. 384,651 blog views to their “Fairfax County Emergency Information Blog.” That number is up from “just” 51,000 views during Hurricane Irene. How did they do it? They simply posted information people needed. For example, I personally linked to one of their blog posts, “What to do if a tree hits your house,” on the Facebook page I was helping administer during the storm. One citizen commented: “Thanks for posting this, I was wondering what to do if that happened.” (I’d like to point out that this kind of blog post could be written in advance.)

According to their stats, people found their way to the blog from many different sources, illustrating the concept of an integrated social media ecosystem. Specifically, people found the blog via Facebook and Twitter, but also from the Fairfax County website, as well as from the local news station’s website.

2. Their Ushahidi map trial was well received. They state the purpose of the mapping effort in the report:

“During Hurricane Sandy, we introduced two new mapping options for our community: a road closures map that we updated with hourly status changes and a crowdsource reporting map for people to submit what they were seeing to give us better situational awareness.”

How did it go? It went well enough that I’m guessing they will be expanding mapping efforts in future disaster events. Road closures were a good choice for this trial: not only are they very dynamic data points, but they are also often among the most asked-about issues on social media sites during and immediately after a storm. Their crowdsourced map had almost 13,000 views with 111 crowdsourced data points, and the road closure map had 16,473 views.

3. Facebook is still a big player. After Hurricane Irene I was impressed that Fairfax County had 879 “Likes” (meaning the number of people who “liked” specific posts and comments, not the number of fans of the page). However, that pales in comparison to the 10,175 “Likes” they received during Sandy. They reached over 127,254 people “virally” every day during this six-day period. “Viral reach” simply means citizens were re-sharing the Fairfax County content on their own Facebook pages. This type of viral content sharing should be a goal of every public safety organization. Why? Although it seems backwards, people often heed warnings and take content more seriously if they receive it from friends rather than from government agencies.

What were your numbers? Are you tracking them? (I do realize that at time of writing this event is far from over for too many people.)


Hurricane Sandy: Fairfax County, VA’s Crowdmap

Post by: Kim Stephens

We are used to seeing volunteers stand up maps that allow both reporting and viewing of citizen-generated situational information. But for Hurricane Sandy, Fairfax County, Virginia’s Office of Emergency Management has jumped on the crowd-mapping bandwagon. In fact, this is one of the few “official” crowdmaps I’ve seen in the United States. Most emergency management organizations are very leery of citizen-generated content. I often hear EMs state: “What if people report wrong information? Will we be held liable?” or “What if people expect emergency services to show up, since we are announcing that we are collecting this content?” The list goes on and on. Fairfax County, the social media rockstars that they are, have decided the benefits outweigh the concerns.

They do, however, address some of these issues by stating prominently on the page:

“PLEASE READ: This reporting system is NOT a replacement for 9-1-1. If you are experiencing an emergency or need to officially report an incident, please call 9-1-1 or the public safety non-emergency number at 703-691-2131, TTY 711. This reporting system is a new tool we’re testing, so we do not expect it will be comprehensive. We will monitor your reports. If we see something significant you share, we will share it with emergency responders/planners. This will give us a selected sense of what’s happening across Fairfax County as a result of Hurricane Sandy.”

Post Hurricane Sandy, I’ll be very interested to hear how well this platform performed for them; for example, whether they were able to obtain information about what was happening (downed trees, flooded roads, and traffic lights out) more quickly than they would have otherwise. Nonetheless, I think it is a great step in the direction of openness and inclusiveness–no matter what its operational utility proves to be.

What is Crisis Mapping?

Post by: Kim Stephens

I recently had a conversation with a colleague who is very well versed in social media and emergency management, asking me to explain crisis mapping. I am not an expert on that topic, but Jen Ziemke, co-founder of the International Network of Crisis Mappers, now assistant professor at John Carroll University and fellow at the Harvard Humanitarian Initiative, certainly is. Her presentation at the University of Notre Dame on the use of crowdsourcing and digital mapping for humanitarian response to the 2010 earthquake in Haiti was recorded, and I have embedded that presentation below. As described on CrisisMappers.net:

She also covered how crisis mapping is being used in a wide variety of contexts, including for election monitoring and tracking of pro-democracy initiatives. This event was co-sponsored by University of Notre Dame’s Center for Social Concerns, Interdisciplinary Center for Network Science & Applications (iCeNSA), and the Master of Science in Global Health Program of the Eck Institute for Global Health.

Her presentation describes the concept and its application during disasters and humanitarian crises very clearly in the first 8 minutes; however, I do recommend viewing it in its entirety.

See also: What Role Does a Crisis Mapper Play? 

The Social Media Tag Challenge: Crowdscanner describes how they won

Post by: Kim Stephens

On March 31st, the US State Department sponsored a game called  “Tag Challenge” that took social media monitoring to a new level.  It was designed by graduate students from six countries, “…the result of a series of conferences on social media and transatlantic security.”

They constructed a task that would be impossible for one person to complete: find 5 “jewel thieves” in 5 cities across the globe in one day, photograph them, and upload the images. The winning team, an MIT-affiliated group that dubbed themselves “Crowdscanner,” was only able to find 3 of the 5 individuals; however, much was learned about how loosely connected, distributed networks can be incentivized to solve a problem.

“The project demonstrates the international reach of social media and its potential for cross-border cooperation,” said project organizer Joshua deLara. “Here’s a remarkable fact: a team organized by individuals in the U.S., the U.K and the United Arab Emirates was able to locate an individual in Slovakia in under eight hours based only on a photograph.”

I had the pleasure of interviewing one of the Crowdscanner team leaders, Dr. Manuel Cebrian of the University of California, San Diego (who also led a team that won the DARPA Red Balloon Challenge in 2009). What stood out to me from our conversation was his emphasis on their incentive structure rather than the social media tools. The networking tools were simply the means to the end, but the structure of the reward incentive, which was borne out by strong micro-economic theory, was absolutely fundamental to their success.

Another interesting component of the challenge was the interaction between the competing teams, which I found in background information provided by Dr. Cebrian. Some rival teams actually attacked Crowdscanner on Twitter with tweets questioning their competence and encouraging people not to support them. As the challenge period came to a close, these attacks became increasingly desperate–even mentioning that Crowdscanner was not from DC and therefore shouldn’t win. That team emphasized that they were “playing for charity,” which the Crowdscanner team noted was “…clearly not in line with their vitriolic attitude towards us.”

How this competing team used Twitter to find information also provides a lesson:

[The other team's] strategy for spreading awareness consisted of their Twitter account… surfing trending hashtags, and tweet-spamming many individuals, social, governmental and private organizations in the target cities, often with an explicit plea for a retweet. The vast majority of these were ignored and, we believe, reduced their credibility.

Q: What does this challenge tell us about incentives and social mobilization? 

We used an incentive scheme that is designed to encourage two things simultaneously: (1) reporting to us if you found a target; (2) helping recruit other people to search for the target. Here’s how we described it: If we win, you will receive $500 if you upload an image of a suspect that is accepted by the challenge organizers. If a friend you invited using your individualized referral link uploads an acceptable image of a suspect, YOU also get $100. Furthermore, recruiters of the first 2000 recruits who signed up by referral get $1 for each recruit they refer to sign up with us (using the individualized referral link). See their webpage for more info on the design.

Graphic by Crowdscanner

The incentive to refer others is significant, since otherwise, you would actually rather keep the information to yourself, rather than inform your friends, since they would essentially compete with you over the prize. But by paying you for referring them also, the incentives change fundamentally.
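To make the mechanics concrete, here is a minimal sketch of the payout rules as described in the interview ($500 for an accepted suspect photo, $100 to the finder's direct recruiter, and $1 per referred signup for the first 2000 recruits). The function name and data structures are my own illustration, not Crowdscanner's actual code, and all payouts were contingent on the team winning:

```python
# Illustrative sketch of the Crowdscanner payout rules described above.
# Names and data structures are hypothetical; payouts applied only if the team won.

FINDER_REWARD = 500    # uploading an accepted image of a suspect
REFERRER_REWARD = 100  # having recruited the finder via your referral link
SIGNUP_REWARD = 1      # per recruit, for recruiters of the first 2000 referred signups

def payouts(referrer_of, finders, referred_signups):
    """Compute per-person payouts.

    referrer_of:      dict mapping each participant to whoever recruited them
                      (absent/None for organic signups)
    finders:          participants whose uploaded suspect images were accepted
    referred_signups: referred participants in signup order (first 2000 pay $1 each)
    """
    totals = {}

    def add(person, amount):
        if person is not None:
            totals[person] = totals.get(person, 0) + amount

    for finder in finders:
        add(finder, FINDER_REWARD)                 # reward the finder...
        add(referrer_of.get(finder), REFERRER_REWARD)  # ...and whoever recruited them

    for recruit in referred_signups[:2000]:        # early-signup referral bonus
        add(referrer_of.get(recruit), SIGNUP_REWARD)

    return totals
```

For example, if alice recruits bob and bob uploads an accepted photo, bob earns $500 while alice earns $100 plus $1 for the referred signup, which is exactly the property the paragraph above describes: telling a friend stops being a competitive loss and becomes a payoff.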

Q: What tools were you using to monitor twitter?

Monitoring Twitter was the smallest component. In fact, monitoring was the easy part, since the data is there to be sorted and analyzed. The biggest challenge was finding the non-Twitter data: we had to infer how information was spread.

Q: Why did you all succeed?

We were able to succeed by leveraging a combination of social media and traditional media, and by building up a reputation as a credible, reliable team. Some competitors focused purely on social media, using Twitter almost exclusively to spread their message. This is not enough, as they became perceived as spammers. We were more selective in our tweets and social media strategy, and I believe this gave us an edge.

Q: Do you think this model could work for finding real “jewel thieves” or high target terrorism suspects? 

Ransoms are complicated incentives. With traditional ransoms, once you have the information you have no incentive to recruit people to help you. Why would you team up?  So the question becomes, how can you structure it so that people are not greedy? We used the same incentive structure for the balloon challenge. These micro-economic models [and the way we employed them] demonstrate that people do recruit their friends, but only if they are provided the right incentive.  If you spread the word, then you get the money.

Q: So, why aren’t organizations using this distributed network model?

Centralized systems are inefficient but they are predictable. In a distributed system you have high efficiency but also have high unpredictability.

Gathering evidence is easy; doing justice is hard. We need models that make sense of the data. But currently, we don’t have this kind of training. It is a new science: “network science” is, at most, a 10-year-old discipline, and only a few people can make sense of it. It will take a while for us to be able to use these tools in any concerted way.


What role does a volunteer “CrisisMapper” play?


JAROSLAV VALUCH / Standby Task Force (Photo credit: SHAREconference)

Post by: Kim Stephens

It seems there have been a lot of conversations on the #SMEM (Social Media and Emergency Management) Twitter hashtag about using volunteers to help response organizations deal with the huge volume of information that comes from social networks during a crisis. (One such conversation was this recent chat.) Organizing those volunteers into a group with set expectations of what they will provide, and then integrating their work into the response effort, are the logical next steps.

One organization doing just that is the Standby Task Force (SBTF).  They have set out to “…[turn] the adhoc groups of tech-savy mapping volunteers that emerge around crises into a flexible, trained and prepared network ready to deploy. The SBTF is a volunteer-based network that represents the first wave in Online Community Emergency Response Teams.”

The SBTF was tasked by the United Nations in March–April 2011 with providing sense-making of social media data during the ongoing crisis in Libya. Jen Ziemke posted this video, of Helena Puig from the SBTF discussing the deployment at the ICCM conference, to the Crisis Mappers blog. I thought it really provided some great insights into what went well and what could be improved.

Another great resource for those interested in the topic is this Google Doc: Standby Task Force UN OCHA. It is their After Action Report of the Libyan effort.