Category Archives: Crowdsourcing

Crowdsourcing, Digital Volunteers, and Policy: New Workshop Summary from the Wilson Center

Post by: Kim Stephens

Woodrow Wilson International Center for Scholars (Photo credit: Wikipedia)

A year ago this month the Commons Lab, part of the Wilson Center’s Science & Technology Innovation Program, hosted a workshop with the goal of “bringing together emergency responders, crisis mappers, researchers, and software programmers to discuss issues surrounding the adoption of… new technologies.” The discussions included an in-depth review of crowdsourcing, specifically the use of digital volunteer teams–as well as the reluctance to use them–for both message dissemination and data aggregation. The 148-page report from that meeting was released yesterday under the title “Use of Mass Collaboration in Disaster Management,” with a focus on “opportunities and challenges posed by social media and other collaborative technologies.”

The Executive Summary states:

Factors obstructing the adoption of crowdsourcing, social media, and digital volunteerism approaches often include uncertainty about accuracy, fear of liability, inability to translate research into operational decision-making, and policy limitations on gathering and managing data. Prior to the workshop, many in the formal response community assumed that such obstructions are insurmountable and, therefore, that the approaches could not be adopted by the response community. However, it became clear during the workshop that these approaches are already being integrated into disaster response strategies at various scales. From federal agencies to local emergency managers, officials have begun exploring the potential of the technologies available. Stories of success and failure were common, but out of both came policy, research, and technological implications. Panelists shared strategies to overcome barriers where it is appropriate, but resisted change in areas where policy barriers serve a meaningful purpose in the new technological environment.

…Workshop participants identified the following activities as some of the more urgent research priorities:

  • Creating durable workflows to connect the information needs of on-the-ground responders, local and federal government decision-makers, and researchers, allowing each group to benefit from collaboration;
  • Developing methods and processes to quickly validate and verify crowdsourced data;
  • Establishing best practices for integrating crowdsourced and citizen-generated data with authoritative datasets, while also streamlining this integration;
  • Deciding on the criteria for “good” policies and determining which policies need to be adapted or established, in addition to developing ways for agencies to anticipate rapid technological change;
  • Determining where government agencies can effectively leverage social networking, crowdsourcing, and other innovations to augment existing information or intelligence and improve decision-making (and determining where it is not appropriate).
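One of those priorities–developing methods to quickly validate and verify crowdsourced data–can be made concrete with a simple corroboration heuristic: treat a report as provisionally verified once several independent users report the same thing nearby. The sketch below is hypothetical (the field names, thresholds, and haversine distance check are my own assumptions, not any agency’s actual method):

```python
from math import radians, sin, cos, asin, sqrt

def distance_km(a, b):
    """Great-circle distance between two (lat, lon) points in km (haversine)."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def is_corroborated(report, all_reports, radius_km=1.0, min_sources=3):
    """Treat a report as corroborated when at least `min_sources` distinct
    users have filed a report of the same category within `radius_km`."""
    nearby_users = {
        r["user"]
        for r in all_reports
        if r["category"] == report["category"]
        and distance_km(r["location"], report["location"]) <= radius_km
    }
    return len(nearby_users) >= min_sources
```

A real pipeline would also weigh reporter history and timestamps, but even a crude rule like this lets responders triage which crowd reports deserve a second look first.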

Curious about the best use of crowdsourcing? Read this report.

Post by: Kim Stephens

Crowdsourcing for disaster response and recovery has been a hot topic since the 2010 earthquake in Haiti. In fact, Google the term “Haiti earthquake crowdsourcing” and you’ll get 132,000 results. But mention the word to local or state emergency managers and you are likely to elicit instant anxiety: How can the crowd be utilized without overwhelming “official” responders? Dr. Patrick Meier recently described this fear on his blog iRevolution:

While the majority of emergency management centers do not create the demand for crowdsourced crisis information, members of the public are increasingly demanding that said responders monitor social media for “emergency posts”. But most responders fear that opening up social media as a crisis communication channel with the public will result in an unmanageable flood of requests…

At the Federal level, however, crowdsourcing is not only familiar–it has recently been embraced very publicly by FEMA. For instance, they used the power of the crowd during the aftermath of Hurricane Sandy to help review images of damage.

The Civil Air Patrol (CAP) were taking over 35,000 GPS-tagged images in fly-overs of damage-affected areas. This was performed as part of their mandate to provide aerial photographs for disaster assessment and response agencies, primarily to FEMA, who used the aggregate geolocated data for situational awareness. The scale of the destruction meant that there was a relatively large amount of photographs for a single disaster. As a result, it was the first time that CAP and FEMA used distributed third-party information processing for the damage assessment. (source: http://idibon.com/crowdsourced-hurricane-sandy-response/)

Just this summer, FEMA added a new feature to their mobile application that is also considered crowdsourcing. The app lets people submit images of damage, which are then aggregated and placed on a publicly available map. Their efforts have received quite a lot of media attention–see: FEMA App Adds Crowdsourcing for Disaster Relief.

However, when talking about crowdsourcing, I often find it is important to break how the crowd is utilized into categories: the FEMA examples above describe two very different uses of the crowd, based on two different objectives. A great new report from the IBM Center for The Business of Government, written by Dr. Daren C. Brabham of the Annenberg School for Communication and Journalism at the University of Southern California, finds that there are actually four categories of crowdsourcing, and that the type chosen should depend on the desired outcome. The report isn’t specific to emergency management, but it does mention some familiar programs, such as the USGS “Did You Feel It?” program.

Below is their summary. You can download the report here: Using Crowdsourcing In Government.

The growing interest in “engaging the crowd” to identify or develop innovative solutions to public problems has been inspired by similar efforts in the commercial world.  There, crowdsourcing has been successfully used to design innovative consumer products or solve complex scientific problems, ranging from custom-designed T-shirts to mapping genetic DNA strands.

The Obama administration, as well as many state and local governments, have been adapting these crowdsourcing techniques with some success.  This report provides a strategic view of crowdsourcing and identifies four specific types:

  • Type 1:  Knowledge Discovery and Management. Collecting knowledge reported by an on-line community, such as the reporting of earth tremors or potholes to a central source.
  • Type 2:  Distributed Human Intelligence Tasking. Distributing “micro-tasks” that require human intelligence to solve, such as transcribing handwritten historical documents into electronic files.
  • Type 3:  Broadcast Search. Broadcasting a problem-solving challenge widely on the internet and providing an award for the solution, such as NASA’s prize for an algorithm to predict solar flares.
  • Type 4:  Peer-Vetted Creative Production. Creating peer-vetted solutions, where an on-line community both proposes possible solutions and is empowered to collectively choose among the solutions.

By understanding the different types, which require different approaches, public managers will have a better chance of success.  Dr. Brabham focuses on the strategic design process rather than on the specific technical tools that can be used for crowdsourcing.  He sets forth ten emerging best practices for implementing a crowdsourcing initiative.
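As a rough illustration, the choice among the four types can be framed as two questions: is the problem about managing information or generating new ideas, and, within that, does the crowd gather information or process it (for ideation, is the best solution objectively provable or a matter of taste)? The decision rule below is my own paraphrase of the report’s typology, not code from the report:

```python
def choose_crowdsourcing_type(problem, solution):
    """Pick one of Brabham's four crowdsourcing types from two questions.

    problem:  'information' (manage existing information) or 'ideation'
              (generate new solutions)
    solution: for information problems -- 'gather' or 'process';
              for ideation problems   -- 'provable' or 'taste'

    A simplified decision rule; the report's own criteria are more nuanced.
    """
    if problem == "information":
        return ("Type 1: Knowledge Discovery and Management"
                if solution == "gather"
                else "Type 2: Distributed Human Intelligence Tasking")
    return ("Type 3: Broadcast Search"
            if solution == "provable"
            else "Type 4: Peer-Vetted Creative Production")
```

For example, a “send us photos of storm damage” campaign is an information-gathering problem (Type 1), while a prize for a damage-prediction algorithm is an ideation problem with a provable answer (Type 3).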

What do you think? Is your organization interested in using crowdsourcing anytime soon? Which category would best fit your desired objectives?

One County’s Social Media Stats: Hurricane Sandy

Post by: Kim Stephens

Fairfax County, VA’s Office of Public Affairs published their Social Media “Metrics Report” which provides a quantitative assessment of how well their social media presence was received during Hurricane Sandy (October 26-31 specifically). One of the more interesting components is the comparison to their social media numbers during Hurricane Irene, a big event for the Northern Virginia and Washington DC area.

Three items from this report stood out to me:

1. 384,651 blog views to their “Fairfax County Emergency Information Blog.” That number is up from “just” 51,000 views during Hurricane Irene. How did they do it? They simply posted information people needed. For example, I personally linked to one of their blog posts, “What to do if a tree hits your house,” on the Facebook page I was helping administer during the storm. One citizen commented: “Thanks for posting this, I was wondering what to do if that happened.” (I’d like to point out that this kind of blog post could be written in advance.)

According to their stats, people found their way to the blog from many different sources, illustrating the concept of an integrated social media ecosystem. Specifically, people found the blog via Facebook and Twitter, but also from the Fairfax County website, as well as from the local news station’s website.

2. Their Ushahidi map trial was well received. They state the purpose of the mapping effort in the report:

“During Hurricane Sandy, we introduced two new mapping options for our community: a road closures map that we updated with hourly status changes and a crowdsource reporting map for people to submit what they were seeing to give us better situational awareness.”

How did it go? It went well enough that I’m guessing they will be expanding mapping efforts in future disaster events. Road closures were a good choice for this trial because not only are they very dynamic data points, they are also among the most asked-about issues on social media sites during and immediately after a storm. Their crowdsourced map had almost 13,000 views with 111 crowdsourced data points, and the road closure map had 16,473 views.

3. Facebook is still a big player. After Hurricane Irene I was impressed that Fairfax County had 879 “Likes” (meaning the number of people who “liked” specific posts and comments, not the number of fans of the page). However, that pales in comparison to the 10,175 “Likes” they received during Sandy. They reached over 127,254 people “virally” every day during this six-day period. A “viral reach” simply means citizens were re-sharing the Fairfax County content on their own Facebook pages. This type of viral content sharing should be a goal of every public safety organization. Why? Although it seems backwards, people often heed warnings and take content more seriously if they receive it from friends rather than from government agencies.

What were your numbers? Are you tracking them? (I do realize that at time of writing this event is far from over for too many people.)


Hurricane Sandy: Fairfax County, VA’s Crowdmap

Post by: Kim Stephens

We are used to seeing volunteers stand up maps that allow both reporting and viewing of citizen-generated situational information. But for Hurricane Sandy, Fairfax County, Virginia’s Office of Emergency Management has jumped on the crowd-mapping bandwagon. In fact, this is one of the few “official” crowdmaps I’ve seen in the United States. Most emergency management organizations are very leery of citizen-generated content. I often hear EMs state: “What if people report wrong information? Will we be held liable?” or “What if people expect emergency services to show up since we are announcing that we are collecting this content?” The list goes on and on. Fairfax County, the social media rockstars that they are, have decided the benefits outweigh the concerns.

They do, however, address some of these issues by stating prominently on the page:

“PLEASE READ: This reporting system is NOT a replacement for 9-1-1. If you are experiencing an emergency or need to officially report an incident, please call 9-1-1 or the public safety non-emergency number at 703-691-2131, TTY 711. This reporting system is a new tool we’re testing, so we do not expect it will be comprehensive. We will monitor your reports. If we see something significant you share, we will share it with emergency responders/planners. This will give us a selected sense of what’s happening across Fairfax County as a result of Hurricane Sandy.”

Post Hurricane Sandy, I’ll be very interested to hear how well this platform performed for them; for example, whether they were able to obtain information about what was happening (downed trees, flooded roads, traffic lights out) more quickly than they would have otherwise. Nonetheless, I think it is a great step in the direction of openness and inclusiveness–no matter what its operational utility proves to be.

What is Crisis Mapping?

Post by: Kim Stephens

I recently had a conversation with a colleague who is very well versed in social media and emergency management, and who asked me to explain crisis mapping. I am not an expert on that topic, but Jen Ziemke, co-founder of the International Network of Crisis Mappers, now assistant professor at John Carroll University and fellow at the Harvard Humanitarian Initiative, certainly is. Her presentation at Notre Dame University on the use of crowdsourcing and digital mapping for humanitarian response to the 2010 earthquake in Haiti was recorded, and I have embedded that presentation below. As described on CrisisMappers.net:

She also covered how crisis mapping is being used in a wide variety of contexts, including for election monitoring and tracking of pro-democracy initiatives. This event was co-sponsored by University of Notre Dame’s Center for Social Concerns, Interdisciplinary Center for Network Science & Applications (iCeNSA), and the Master of Science in Global Health Program of the Eck Institute for Global Health.

Her presentation describes the concept and its application during disasters and humanitarian crises very clearly in the first 8 minutes; however, I do recommend viewing it in its entirety.

See also: What Role Does a Crisis Mapper Play? 

The Social Media Tag Challenge: Crowdscanner describes how they won

Post by: Kim Stephens

On March 31st, the US State Department sponsored a game called  “Tag Challenge” that took social media monitoring to a new level.  It was designed by graduate students from six countries, “…the result of a series of conferences on social media and transatlantic security.”

They constructed a task that would be impossible for one person to complete: find 5 “jewel thieves” in 5 cities across the globe in one day, photograph them, and upload the images. The winning team, an MIT-affiliated group that dubbed itself “Crowdscanner,” was only able to find 3 of the 5 individuals; however, much was learned about how loosely connected distributed networks can be incentivized to solve a problem.

“The project demonstrates the international reach of social media and its potential for cross-border cooperation,” said project organizer Joshua deLara. “Here’s a remarkable fact: a team organized by individuals in the U.S., the U.K and the United Arab Emirates was able to locate an individual in Slovakia in under eight hours based only on a photograph.”

I had the pleasure of interviewing one of the Crowdscanner team leaders, Dr. Manuel Cebrian of the University of California, San Diego (who also led a team that won the DARPA Red Balloon Challenge in 2009). What stood out to me from our conversation was his emphasis on their incentive structure rather than the social media tools. The networking tools were simply the means to an end; the structure of the reward incentive, which was borne out by strong micro-economic theory, was absolutely fundamental to their success.

Another interesting component to the challenge was the interaction between the competing teams, which I found in background information provided by Dr. Cebrian.  Some rival teams actually attacked Crowdscanner on twitter with tweets questioning their competence and encouraging people not to support them. As the challenge period came to a close, these attacks became increasingly desperate–even mentioning that Crowdscanner was not from DC and therefore shouldn’t win. That team emphasized that they were “playing for charity,” which the Crowdscanner team noted “…even though it was clearly not in line with their vitriolic attitude towards us.”

How this competing team used twitter to find information also provides a lesson:

[The other team's] strategy for spreading awareness consisted of their Twitter account… surfing trending hashtags, and tweet-spamming many individuals, social, governmental and private organizations in the target cities, often with an explicit plea for a retweet. The vast majority of these were ignored and, we believe, reduced their credibility.

Q: What does this challenge tell us about incentives and social mobilization? 

We used an incentive scheme that is designed to encourage two things simultaneously: (1) reporting to us if you found a target; (2) helping recruit other people to search for the target. Here’s how we described it: If we win, you will receive $500 if you upload an image of a suspect that is accepted by the challenge organizers. If a friend you invited using your individualized referral link uploads an acceptable image of a suspect, YOU also get $100. Furthermore, recruiters of the first 2000 recruits who signed up by referral get $1 for each recruit they refer to sign up with us (using the individualized referral link). See their webpage for more info on the design.

Graphic by Crowdscanner

The incentive to refer others is significant: otherwise, you would rather keep the information to yourself than inform your friends, since they would essentially compete with you for the prize. But by also paying you for referring them, the incentives change fundamentally.
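For illustration, the payout rules quoted above can be written as a small function. This is a simplified sketch of the scheme as described (the names and data shapes are invented, and it applies all three rewards unconditionally, ignoring the “if we win” condition on the larger prizes):

```python
def payouts(referrer_of, finder, signup_order):
    """Compute rewards under the Crowdscanner-style referral scheme.

    referrer_of:  dict mapping each participant to whoever recruited them
    finder:       the participant who uploaded an accepted suspect image
    signup_order: participants in the order they signed up by referral
    """
    rewards = {}

    # $500 to whoever uploads an accepted image of a suspect.
    rewards[finder] = rewards.get(finder, 0) + 500

    # $100 to the person who directly recruited the finder.
    ref = referrer_of.get(finder)
    if ref is not None:
        rewards[ref] = rewards.get(ref, 0) + 100

    # $1 per recruit to recruiters of the first 2000 referral sign-ups.
    for recruit in signup_order[:2000]:
        r = referrer_of.get(recruit)
        if r is not None:
            rewards[r] = rewards.get(r, 0) + 1

    return rewards
```

For example, if Alice recruits Bob, Bob recruits Carol, and Carol finds a suspect, Carol earns $500, Bob earns $100 plus $1 for recruiting Carol, and Alice earns $1 for recruiting Bob–which is exactly why passing the word along beats hoarding the lead.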

Q: What tools were you using to monitor twitter?

Monitoring twitter was the smallest component. In fact,  monitoring  was the easy part, since the data is there to be sorted and analyzed. The biggest challenge was finding the non-twitter data: we had to infer how information was spread.

Q: Why did you all succeed?

We were able to succeed by leveraging a combination of social media and traditional media, and by building up a reputation as a credible, reliable team. Some competitors focused purely on social media, almost using Twitter exclusively to spread their message. This is not enough, as they became perceived as spammers. We were more selective in our Tweets and social media strategy, and I believe this gave us an edge.

Q: Do you think this model could work for finding real “jewel thieves” or high target terrorism suspects? 

Ransoms are complicated incentives. With traditional ransoms, once you have the information you have no incentive to recruit people to help you. Why would you team up?  So the question becomes, how can you structure it so that people are not greedy? We used the same incentive structure for the balloon challenge. These micro-economic models [and the way we employed them] demonstrate that people do recruit their friends, but only if they are provided the right incentive.  If you spread the word, then you get the money.

Q: So, why aren’t organizations using this distributed network model?

Centralized systems are inefficient but they are predictable. In a distributed system you have high efficiency but also have high unpredictability.

Gathering evidence is easy; doing justice is hard. We need to have models that make sense of the data. But currently, we don’t have this kind of training. It is a new science: “network science” is at most a 10-year-old discipline, and only a few people can make sense of it. It will take a while for us to be able to use these tools in any concerted way.


What role does a volunteer “CrisisMapper” play?

JAROSLAV VALUCH / Standby Task Force (Photo credit: SHAREconference)

Post by: Kim Stephens

It seems there have been a lot of conversations on the #SMEM (Social Media and Emergency Management) twitter hashtag about using volunteers to help response organizations deal with the huge volume of information that comes from social networks during a crisis. (One conversation was this recent chat.) Organizing those volunteers into a group with set expectations of what they will provide, and then integrating their work into the response effort, are the logical next steps.

One organization doing just that is the Standby Task Force (SBTF). They have set out to “…[turn] the ad hoc groups of tech-savvy mapping volunteers that emerge around crises into a flexible, trained and prepared network ready to deploy. The SBTF is a volunteer-based network that represents the first wave in Online Community Emergency Response Teams.”

The SBTF was tasked by the United Nations in March-April 2011 to provide sense-making of social media data during the ongoing crisis in Libya. Jen Ziemke posted a video to the Crisis Mappers blog of Helena Puig from the SBTF discussing the deployment at the ICCM conference. I thought it provided some great insights into what went well and what could be improved.

Another great resource for those interested in the topic is this Google Doc: Standby Task Force UN OCHA. It is their After Action Report of the Libyan effort.

Crowdsourcing Down Under

Post by: Kim Stephens

The Brisbane, Australia City Council has deployed a Ushahidi map in response to flooding occurring in their community. What is Ushahidi? Watch the 2-minute video on their website, but in general the software allows reports from anyone (the public, first responders, government agencies) to be submitted and posted to an interactive map.

For this instance of Ushahidi they are currently displaying only three categories of information: flooded roads, road closures and sandbag locations. For example, the map above shows a pink dot for each road closure. Brisbane City Council also has a twitter feed they are using not only to provide critical information, but also to advertise the map’s existence. Citizens can report information about the flood by sending a tweet with the hashtag #bccroads or by filling in the form on the website. People are also clearly sending information by “@” messaging the Council via twitter. (Ushahidi has a mobile platform; however, I’m not sure if that application is being utilized for this event.)
I like how the Council replies to reports of flooding by also reminding the citizen, as well as everyone else, about the map.

The benefit of this map, which includes highly decentralized, hyper-local information, is demonstrated by simply clicking on one of the icons. Each blue dot represents a road closure that the user can click to obtain the full report, pictured above. This report states: “Bowman Parade (road) is currently experiencing localized flooding. Please do not attempt to drive through flood waters.” The platform also allows the user to see whether the information has been verified; in this case, it has been. There are 64 reports currently listed.
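The report structure described above–a title, a category, and a verified flag–is easy to model in code. The sketch below is a hypothetical rendering of that shape (the field names are my invention, not Ushahidi’s actual schema), showing how a dashboard might surface only verified flood reports:

```python
from dataclasses import dataclass

@dataclass
class Report:
    """Rough shape of one crowdmap report (illustrative field names)."""
    title: str
    category: str   # e.g. "flooded road", "road closure", "sandbag location"
    verified: bool  # has the report been vetted by the map's moderators?

reports = [
    Report("Bowman Parade localized flooding", "flooded road", True),
    Report("Water over causeway", "flooded road", False),
    Report("Sandbags at depot", "sandbag location", True),
]

# Surface only the verified flood reports, e.g. for an official road-status page.
verified_flooding = [r.title for r in reports
                     if r.category == "flooded road" and r.verified]
```

Keeping the verified flag explicit is what lets a map show unvetted crowd reports and vetted ones side by side without conflating them.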

This isn’t groundbreaking. However, I am intrigued that a government agency has so completely embraced crowdsourced information. They understand that first responders can’t be everywhere, but citizens, armed with cell phones and an easy way to report what they are seeing, can provide critical, life-saving information for the benefit of everyone. Just yesterday I read a blog post by an American first responder who lamented that there was no great way to gather information from the crowd. I’m always a bit surprised to read posts like that, which is why I continue to write about Ushahidi and similar applications. If you are aware of any US government agency, local or state, that has deployed Ushahidi, let me know.

Crisis Mapping, Crisis Crowdsourcing and Southern Storms

Post by: Kim Stephens

Photo courtesy FEMA photo library: Hackleburg, AL

A couple of weeks ago the #SMEMchat group discussed crowdsourcing and crisis mapping, and I’d like to revisit that topic today. Whenever I give talks on this subject, a lot of people indicate that they have never heard of crowdsourcing. But the Wikipedia definition is fairly straightforward: crowdsourcing is the act of outsourcing tasks, traditionally performed by an employee or contractor, to an undefined, large group of people or community (a “crowd”) through an open call. Jeff Howe, one of the first authors to employ the term, established that the concept of crowdsourcing depends essentially on the fact that, because it is an open call to an undefined group of people, it gathers those who are most fit to perform the tasks, solve complex problems, and contribute the most relevant and fresh ideas.

I think this definition is best understood through an example. If you have ever watched the local news and heard the station ask folks to send in pictures of an event (usually weather), then you have witnessed crowdsourcing. There is an implicit incentive structure here: the local news channel gets to choose from hundreds of pictures and does not have to hire a photographer; those who contribute get to have their picture shown on television. (There are many books and articles written on this topic–see my bibliography, which has an entire section on it.)

Essentially there are two types of crowdsourcing during a crisis:

1. The provision of intelligence/information.

During this past week’s Senate Committee on Homeland Security & Governmental Affairs hearing on the importance of social media in emergency management, the Administrator of FEMA, Craig Fugate, alluded to crowdsourcing by referring to citizens as sources of information. The “task” that is being outsourced is simply the task of providing information from the field. This information answers the first question after a crisis: What just happened?


In Administrator Fugate’s written testimony, he states: “We value two-way communication not only because it allows us to send important disaster-related information to the people who need it, but also because it allows us to incorporate critical updates from the individuals who experience the on-the-ground reality of a disaster.” In other words, the technology now exists to crowdsource the inflow of critical data about the situation after a crisis. Just like the local news station asking people to send in pictures, emergency managers can access information that lets them understand an event from the perspective of those immediately impacted, just by monitoring YouTube. For example, there were hundreds of videos of the tornadoes in Alabama on YouTube, posted in real time, giving anyone with a computer or a smart phone a good sense of the amount of damage that likely occurred. http://www.youtube.com/watch?v=8HhzDs1B4YA&feature=related

2. Gathering/Sorting/Making Sense of Crisis Data

If people in emergency management are uncomfortable with data provided by the crowd via social media, then this second type of crowdsourcing is even more uncomfortable: Asking the “crowd” to help gather and sort this data. The emergency management community, however, should understand this concept because it has become a reality after every recent disaster.

There are many organizations whose stated missions are to help organize the crowd as well as the data: CrisisCommons, Org9, Humanity Road, the CrisisMappers’ Standby Task Force, Sahana, Tweak the Tweet, and Ushahidi–I’m sure I’m missing a few. The mission of CrisisCommons and its sub-group CrisisCamp–whose co-founder Heather Blanchard also testified before the above-mentioned congressional committee–can serve as an example of the type of support these groups provide after a crisis:

…to connect a global network of volunteers who use creative problem solving and open technologies to help people and communities in times and places of crisis. CrisisCampers are not only technical folks like coders, programmers, geospatial and visualization ninjas, but we are also filled to the brim with super creative and smart folks who can lead teams, manage projects, share information, search the internet, translate languages, know usability, can write a research paper and can help us edit wikis.

The recent tornadoes provide a great example of all of the above-mentioned groups’ work. One of the tasks they are currently performing can be boiled down to one sentence: matching need with the desire to help. This, of course, is a fundamental task after a crisis: all disaster plans have Volunteer and Donations Management Support Annexes because it is well understood that, if not well organized and planned for, volunteers and donations can sometimes hinder rather than help the response and recovery efforts. Furthermore, volunteers who are turned away can become vocal about not being “allowed” to help, creating a political problem for the responding organizations as well.

One of the main tasks these new volunteer organizations perform is what Patrick Meier, co-founder of CrisisMappers, describes as crowdfeeding: providing information from the crowd for the crowd–skirting or bypassing the “official” response organizations altogether.

How is this done? New communications technologies, such as social media, allow people to broadcast their needs to anyone willing to listen. Above is an example of a “tweaked” or MT (modified) tweet by a Humanity Road volunteer. The stated need was posted by the Salvation Army, which tweeted that it was serving meals for volunteers (the hyperlink provided lets those interested in helping serve get more information). RVAREGal simply tweaked the information so that it could easily be read by a computer for inclusion in a database. She added hashtags for #need, #info and location #loc. As this database of information is built, it can then be uploaded into a visualization platform, such as Ushahidi. This has been done in support of the southern storms, and you can view it at the Alabama Recovery Information Map.
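The point of the #need, #info and #loc markers is that a machine can pull structured fields out of an otherwise free-form tweet. Below is a minimal sketch of such a parser; it is my own simplification, not the actual Tweak the Tweet grammar, which defines more tags and stricter syntax:

```python
import re

def parse_tweaked_tweet(text):
    """Extract machine-readable fields from a 'tweaked' tweet that uses
    #need / #info / #loc markers. Captures the text following each marker
    up to the next marker or the end of the tweet."""
    fields = {}
    for tag in ("need", "info", "loc"):
        m = re.search(rf"#{tag}\s+(.*?)(?=\s+#(?:need|info|loc)\b|$)", text)
        if m:
            fields[tag] = m.group(1).strip()
    return fields
```

Run on a tweet like “Salvation Army serving meals #need volunteers to serve #loc Tuscaloosa AL”, this yields a record with separate `need` and `loc` fields, ready for insertion into a database and, from there, a map.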

The Ushahidi map is not only a visualization of the information; it also creates an entire ecosystem: links to the original source, the date it was submitted (which is key, since this information does expire), a description of the information and additional reports with similar data, as well as a form to submit needs or resources directly.

In conclusion, I always like to ask: what are we learning?

  • Non-governmental organizations and the volunteer technical community are working to gather data from the crowd after a crisis and put it into platforms easily used and understood by the general public–they are not waiting for permission from any government agency, and they are usually not registered as a “VOAD”.
  • The public freely shares information about their situation (I need help/I can provide help) on social platforms that can be seen by anyone in the world–not just local response officials.
  • Response organizations could turn to the volunteer technical community for help in sorting through large amounts of data after a crisis in order to process it into usable information.

As Patrice Cloutier stated on the chat, “We ask the public to be prepared, with social media plus mobile technology, they also want to participate….embrace it!”

Resources:

For more information on the origins and descriptions of crisis mapping see “What is Crisis Mapping? An Update on the Field and Looking Ahead” by Patrick Meier.

Crisis data, it’s not just for response organizations.

Post By: Kim Stephens

Recently, Jeannette Sutton wrote a brief article about what they are finding in New Zealand regarding the use of social media, and data in general, in the Christchurch earthquake disaster. Her title, “Competing information, complementary information, coordinating information,” sums up some of the problems citizens have in understanding where to find accurate and trustworthy information on social media and other online platforms. “Without a central, authoritative site members of the public must make serious evaluations about which information will lead to their decision-making and actions.” She noticed, however, that quite a bit of this online info, including social media, isn’t a restatement of official info, or even a contradiction as some might assume, but rather serves as a complement. She states:

For instance, volunteer technical communities that mobilize resources early on aggregate and map information from the crowd. This is a complementary activity to those serving in official capacities that are responsible for critical infrastructure and emergency response. There are other examples that also show this complementary nature of efforts that may or may not duplicate data sources, but serve specific populations and needs at varying points of the response.

The nature of this complementary data was discussed a bit in a conversation this weekend about the upcoming National Level Exercise ’11 with Heather Blanchard of CrisisCommons. We talked about the importance of everyone–citizens, government response organizations, and NGOs–having access to data that she calls community indicator data. Community indicator data could be defined as any data regarding the location and state of the infrastructure serving the affected community. This could include: shelters, grocery store availability, communications (i.e., the state of cell phone towers and telecoms), hospitals, banking/ATMs, water, fuel, power, etc. Some of this data can be highly localized and can fluctuate often during the recovery. But as Ms. Sutton also points out in her article, for citizens, it is important to be able to access this information in a meaningful format.
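As a rough illustration (the post doesn’t define any schema, so the field names here are my own invention), a single piece of community indicator data could be modeled as a small record that captures location, status, and–crucially, since conditions fluctuate during recovery–how fresh the report is:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class CommunityIndicator:
    """One piece of community indicator data: the location and current
    state of infrastructure the affected community relies on."""
    category: str        # e.g. "shelter", "fuel", "power", "water"
    location: str        # place name (or coordinates, in a real system)
    status: str          # e.g. "open", "closed", "intermittent"
    reported_at: datetime

    def is_stale(self, max_age_hours: int = 24) -> bool:
        # Conditions change quickly during recovery, so reports expire.
        return datetime.utcnow() - self.reported_at > timedelta(hours=max_age_hours)

fresh = CommunityIndicator("water", "Christchurch CBD", "open", datetime.utcnow())
old = CommunityIndicator("fuel", "Lyttelton", "open",
                         datetime.utcnow() - timedelta(hours=48))
print(fresh.is_stale(), old.is_stale())  # False True
```

The expiry check is the point: a map that keeps serving a 48-hour-old fuel report is arguably worse than no map at all.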

The “ownership” of this data varies widely. Some of the information is from the private sector (e.g., grocery stores, fuel, power); some is from non-governmental organizations (e.g., shelters and feeding centers); and some is citizen- or user-generated (e.g., “I’m willing to open my well of clean water for those who live nearby”). User-generated data can be curated by volunteers from social media feeds such as Twitter, news feeds, and/or blogs, or can even be sent directly to those curators via text message or email. The crisis-mapping community understands that citizens need access to all of this information–not just response organizations. Their contribution is to analyze, sort, validate, and format this data into a visualization platform–the picture above is of a Ushahidi map from Christchurch, NZ, of available fresh water after the quake.
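The analyze/sort/validate/format workflow described above could be sketched very roughly as follows. This is a toy sketch, not how Ushahidi or any real deployment works: the keyword matching and duplicate check stand in for the human review and richer validation that volunteer curators actually do.

```python
# Toy sketch of volunteer curation: raw crowd messages are sorted into
# categories, crudely validated (exact-duplicate check), and formatted
# into map-ready records. Category keywords are invented for the example.
CATEGORY_KEYWORDS = {"water": "water", "shelter": "shelter", "fuel": "fuel"}

def curate(messages):
    seen = set()
    curated = []
    for source, text in messages:
        key = text.strip().lower()
        if key in seen:          # validation stand-in: drop exact duplicates
            continue
        seen.add(key)
        category = next(
            (cat for kw, cat in CATEGORY_KEYWORDS.items() if kw in key),
            "uncategorized",
        )
        curated.append({"source": source, "category": category, "text": text})
    return curated

reports = [
    ("twitter", "Clean water available at my well near Sumner"),
    ("sms",     "Clean water available at my well near Sumner"),  # duplicate
    ("email",   "Shelter open at the school hall"),
]
print(curate(reports))
```

Even this crude version shows why curation matters: the same offer of help often arrives through several channels, and a map is only useful once those reports are merged and labeled.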

Google has also taken up the role of sorting, filtering, and visualizing crisis data, as is evident in its expanding and ongoing role in the aftermath of the Japan earthquake. Their public policy blog details the resources they have made available to all involved: impacted citizens, concerned family members, news media, first responders, and volunteer organizations. The person-finder application has now been deployed for many disasters, and it was up and running within two hours of the earthquake. It seems they have learned from each deployment; for this event, for example, they have made the service a little easier to use for people without smartphones.

Low-tech meets high-tech:

I’ve seen several news reporters standing in shelters next to a wall of paper with lists of the names of missing people. Google has reached out to shelter occupants and asked them to take pictures of those lists and email them to the company. The article explains: “Those photos are automatically uploaded to a public Picasa Web Album. We use scanning technology to help us manually add these names to Person Finder; but it’s a big job that can’t be done automatically by computers alone, so we welcome volunteers with Japanese language skills who want to help out.”

Google understands the need for citizens to have access to community indicator data, which is why they are providing timely updates on rolling blackouts. They are importing data from Honda to display a map of impassable roads. Other data available:

…a Google Earth mashup with new satellite imagery. We’re also constantly updating a master map (in Japanese and English) with other data such as epicenter locations and evacuation shelters. And with information from the newspaper Mainichi, we’ve published a partial list of shelters.

Satellite images
We’re also working with our satellite partners GeoEye and DigitalGlobe to provide frequent updates to our imagery of the hardest-hit areas to first responders as well as the general public. You can view this imagery in this Google Earth KML, browse it online through Google Maps or look through our Picasa album of before-and-after images of such places as Minamisanriku and Kesennuma.

Since Japan has a very robust emergency response system, and their citizens are very resourceful, it is interesting to see what role this non-traditional response organization, Google, is playing during the crisis. But in general, I think this year has taught us the importance of having publicly available data. Although Dr. Sutton was talking about New Zealand, I think some of her findings apply to the Japan situation as well. She states: “…this complementary nature of efforts may or may not duplicate data sources, but serve specific populations and needs at varying points of the response.”