Big Data: Friend or Foe?

The amount of information we save, store, and share digitally is growing at a rapid pace. This big data and the sophisticated analytics it enables give us the power to address sustainability challenges in smarter, more informed, and more effective ways. But big data can be costly from an energy and materials use perspective, and it raises important questions about privacy, security, and freedom of expression. This one-hour conversation explored several questions: How can companies and civil society use big data to change behavior and address sustainability challenges? What happens if data gets into the wrong hands? What types of assumptions are made about individuals without their knowing? What special responsibilities arise when business collects big data?



  • Big data refers to the ability to capture, aggregate, and process an ever-greater volume of data. Big data analytics refers to uncovering patterns, correlations, trends, behavioral history, consumer preferences, and other useful information. Finally, big data applications relate to innovations or smarter decision-making in public safety, law enforcement, health care, education, advertising, fraud prevention, financial services, energy use, and more.

  • Companies using a data tool need to consider the integrity of the software, data, and outcomes or application of the tool itself.

  • Users of big data need to think innovatively about how it can be used to address challenges such as climate change and poverty. However, big data should not be used to answer questions it cannot meaningfully inform in the first place or to tackle complex problems that involve multigenerational data.

  • The value of trust and thinking holistically are often missing from conversations about big data.

Memorable Quotes

“Big data is a trust proposition. And the link between trust and privacy is stronger today than it ever has been.” —Christina Peters, IBM Corporation

“Transparency can’t solve every problem, but it can help build a relationship of trust that the program is not meaning to do things to people but for them.” —Christina Peters, IBM Corporation

“The risk that big data analytics lead to discriminatory outcomes is real. Addressing these risks will require increased dialogue and shared learning between companies and civil-society organizations.” —Dunstan Allison-Hope, BSR

“When using data, we go for the easy cases first without thinking of how we will apply data to the more difficult problems.” —Seeta Peña Gangadharan, New America’s Open Technology Institute


Dunstan Allison-Hope opened the session by providing some context on big data to ensure that everyone had a shared understanding of the terms defined above (big data, big data analytics, and big data applications). He reinforced the idea that we must consider subtle nuances when talking about big data—labeling it friend or foe does not adequately cover its complexity. He then asked the speakers to introduce themselves.

Christina Peters introduced IBM’s primary services, including delivering robust ICT services and software solutions that enable other companies to make use of big data. She then described IBM as a leader in privacy since the 1970s, when the company first realized the profound privacy challenges emerging with the use of computing technology. She explained how data analytics has evolved and the challenges around ensuring that companies use responsible data practices. Data is coming from everywhere these days—personal interactions with social media outlets, geolocation data from smartphones, and even our cars, which are the most technically sophisticated objects many people own. Used responsibly, data can help organizations conduct business better and make more informed decisions.

Next, Seeta Peña Gangadharan spoke about her work with communities in the United States on civil rights and social justice in the digital sphere to address deeply rooted historical inequities. She cautioned that as big data becomes more commonplace and embedded in everyday interactions, it could be used to automate discrimination and unfairness, both when data analytics delivers inaccurate profiles of individuals and when accurate personal profiling is used to take advantage of a person’s weaknesses, such as with predatory lending. In either case, she stressed, it could have profound impacts on people’s stability, mobility, and ability to determine their personal destinies.

When asked what excites her most about big data and analytics, Peters mentioned the potential transformation of organizations that are able to more fully understand the information they have and use it to craft more accurate predictions. She provided an initial example of cities using smart technologies, such as traffic management programs or smart parking systems, to improve urban life, and a second example related to oncologists using big data to accurately diagnose individual patients based on the exact genome of their tumors.

The speakers then turned to some of the risks of using big data. Peters shared what IBM is doing related to data protection. In addition to implementing global privacy assessments and analyzing global privacy laws, IBM has created a checklist to ensure that the organizations running its software are using big data appropriately.

Peña Gangadharan described some additional risks. She mentioned that people often can’t choose to opt out of having their data collected, they don’t have a say in the design of big data systems, and finally, there is little transparency into how the tools work or deliver outcomes. People have little ability to question these outcomes, which can cause public anxiety. Peña Gangadharan then explained some of the potential discriminatory outcomes related to data analytics—for example, in housing, education, employment, credit, and law enforcement.

To address these challenges, both speakers agreed that companies need to address big data applications holistically, carefully considering the context and how analytic outcomes will be used. They also stressed the need to build trust with the users or people from whom data is collected.

Allison-Hope asked the speakers about the role of companies, governments, civil society, and individuals or users in addressing discriminatory outcomes. In response, Peña Gangadharan added tech innovators to the list and spoke to the importance of ethically designing analytics systems to avoid discriminatory outcomes. Both speakers agreed that all groups share this responsibility and that there is a need for widespread education about and awareness raising around big data.

During the Q&A session, one participant asked whether people are requesting to fill in personal data gaps when data outcomes are inaccurate or incomplete. Peña Gangadharan responded that it may be impossible from a technical standpoint, and she emphasized the need to contextualize all data outcomes. Peters added that, when given the opportunity to interact with their own data, people would rather enhance or correct it than opt out of providing it. However, that point raises interesting questions about who has access to various technologies.

Allison-Hope closed the session by urging participants to consider the risks and opportunities of big data applications and how companies can deliberately integrate considerations for both into their CSR strategies.


November 5, 2014