When looking at the words of Hal Varian, Google’s Chief Economist and professor emeritus at the University of California, Berkeley, thinking of Big Data seems natural. Big Data – a dictum that currently seems to be on everyone’s lips – has recently developed into one of the most discussed topics in research and practice. Looking at academic publications, we find that more than 70 % of all ranked papers dealing with Big Data were published within the last two years (Pospiech and Felden 2012), as well as nearly 12,000 hits for Big Data on Google Scholar across various fields of research. In 2011, more than 530 academic Big Data related publications could be counted (Chen et al. 2012). Google returns more hits for “Big Data” than for “development aid”, almost daily an IT-related business magazine publishes a Big Data special issue, and a myriad of Big Data business conferences is being held. In Gartner’s current Hype Cycle for Emerging Technologies (Gartner 2012), Big Data sits right at the peak of its hype phase, and according to this source broad adoption is to be expected within the next five years.
Big Data provokes excitement across various fields such as science, government, and industries like media and telecommunications, health care, engineering, or finance, where organizations face massive quantities of data and new technologies to store, process, and analyze those data. Despite the cherished expectations and hopes, the question arises why we face such excitement around Big Data, which at first sight seems to be a fashionable hype rather than a revolutionary concept. Is Big Data really something new, or is it just new wine in old bottles, seeing that, e.g., data analytics has been performing the same type of analysis for decades? Do more data and increased or faster analytics always imply better decisions, products, or services, or is Big Data just another buzzword to stimulate IT providers’ sales?
Take the traditional financial service industry, which currently cherishes huge expectations in Big Data, as an example: collecting massive amounts of data via multiple channels has long been part of its business model to customize prices and product offers or to calculate credit ratings. However, improving financial services by exploiting these huge amounts of data implied constant updating efforts, media disruptions, and expensive acquisition and processing of data. Hence, more data resulted in expensive data management, in higher prices for products or services, as well as in inconvenient data entry processes for customers. Consequently, instead of the traditional universal banks that focused on a data-intensive business model, direct banks with a higher degree of standardization and IT support as well as a focus on (very few) key customer data have often become more successful. Focusing solely on pure IT-based data acquisition, processing, and analysis to save costs, on the other hand, is virtually impossible in industries such as banking due to the intense personal contact involved. Besides, neither in the financial service industry nor in other industries do more data automatically lead to better data, better business success, better services, better decisions, or (more) satisfied customers. Above all, Big Data brings a lot of still unresolved challenges regarding the volume, velocity, variety, and veracity of data, which should not be underestimated.
Often enough, more data even lead to a certain amount of “data garbage”, which is usually recognized and managed more easily by employees than by analytics software (veracity). Additionally, managing data from various sources such as mobile applications, online social networks, or CRM systems is far from trivial (variety). The high data traffic brings with it the challenge of archiving, retrieving, and analyzing huge amounts of data in real time (volume and velocity). Unsurprisingly, nearly every second Big Data project is canceled before completion (Infochimps 2013). And as if these challenges were not enough, the myriad of differing legal privacy restrictions across countries is turning into one of Big Data’s most serious challenges.
[1] Martin Hilbert, et al. Info Capacity: How to Measure the World’s Technological Capacity to Communicate, Store and Compute Information? Part I: Results and Scope. 2012.
[2] J. Manyika. Big Data: The Next Frontier for Innovation, Competition, and Productivity. 2011.
[3] Guy Holmes, et al. The World’s Technological Capacity to Store, Compute and Communicate Information that has Already Been Created and Does not Need to be Done Again – 2012. 2012.
[4] Dong, et al. Mining Data Correlation from Multi-Faceted Sensor Data in Internet of Things. 2011.
[5] M. Batty. The New Science of Cities. 2013.
[6] Erik Brynjolfsson, et al. Big Data: The Management Revolution. Harvard Business Review, 2012.
[7] David Stuart, et al. The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences. Online Information Review, 2015.
[8] Matthew Zook, et al. Mapping the Data Shadows of Hurricane Sandy: Uncovering the Sociospatial Dimensions of ‘Big Data’. 2014.
[9] Veda C. Storey, et al. Business Intelligence and Analytics: From Big Data to Big Impact. MIS Quarterly, 2012.
[10] David L. Tulloch. Crowdsourcing Geographic Knowledge: Volunteered Geographic Information (VGI) in Theory and Practice. International Journal of Geographical Information Science, 2014.
[11] Carsten Felden, et al. Big Data – A State-of-the-Art. AMCIS, 2012.
[12] Martin Hilbert, et al. The World’s Technological Capacity to Store, Communicate, and Compute Information. Science, 2011.
[13] Alexander Cachinero Vasiljevic. Big Data Is Big Business. 2015.