Conceptual Model for Successful Implementation of Big Data in Organizations

ABSTRACT
The term 'big data' has gained huge popularity in recent years among IT professionals and academics. Big data describes the massive amounts of data that can be processed and analyzed using technology to gain business value and help organizations achieve competitive advantage. This paper aims to develop a holistic model of the factors that affect the success or failure of big data implementation in organizations. Furthermore, this research examines the opportunities that organizations can attain from implementing big data, as well as the challenges that could hinder such implementation. The proposed model provides IT managers and decision makers with the key factors they need to consider when deciding to implement big data, in order to ensure that it delivers competitive advantage.

KEYWORDS: Big Data, opportunities, challenges, implementation

INTRODUCTION
Interest in big data has increased because of the significant amount of data generated every day. Data keeps getting bigger because it is generated by ever more devices and sources, such as personal computers, mobile phones, government records, healthcare records, social media, street sensors, climate sensors, airport terminals, and hypermarket points of sale. These sources already generate a massive amount of data and will generate even more as time passes, since people are becoming increasingly dependent on technology. The Cisco Visual Networking Index (VNI) report (2015) anticipated that mobile data traffic would grow to 24.3 Exabytes per month by 2019 because of increased smartphone usage, nearly a tenfold increase over 2014. A study by Intel likewise showed that data has increased enormously in the last decade: humankind had generated five Exabytes of data up to 2003, and from 2003 to 2013 the total grew to 2.7 Zettabytes (i.e., 2,700 Exabytes, roughly 500 times more data), with a further threefold increase projected by 2015. In the same context, Das et al. (2013) pointed to the rapid growth of global data. They noted that it took from the dawn of time until 2003 to create five Exabytes of information, whereas the same volume of data is now created in just two days, and projected that global data would reach eight Zettabytes by 2015 (the equivalent of 18 million Libraries of Congress). Although data is increasing enormously, only a very small fraction of it has been exploited; the rest remains untapped. According to IBM and Intel, 90% of data is unstructured and unused. Data can be classified into structured and unstructured data. Structured data refers to data that can be organized and stored in relational databases so that it can be used and searched efficiently. Unstructured data refers to data that does not have a pre-defined data model or is not organized in a pre-defined manner, such as videos, photos, images, emails, text documents, and blogs. Searching and analyzing unstructured data is more difficult than doing so for structured data. Das et al. (2013) also argued that unstructured data would account for 90% of all data in the next decade, and that analyzing this massive amount of data would reveal business improvements that were previously impossible to identify. Indeed, interest in big data has increased because it is expected to have a significant impact on organizations, and this impact would largely be achieved by analyzing unstructured data.
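To make the structured/unstructured distinction concrete, the following minimal Python sketch contrasts querying structured records in a relational database with a crude keyword search over free text. It is illustrative only and not drawn from the studies cited above; the sales table and sample email are hypothetical.

    import sqlite3

    # Structured data: a fixed schema lets the database store, index,
    # and query records efficiently.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (store TEXT, item TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO sales VALUES (?, ?, ?)",
        [("Store A", "milk", 3.50), ("Store B", "bread", 2.25)],
    )
    total = conn.execute(
        "SELECT SUM(amount) FROM sales WHERE store = ?", ("Store A",)
    ).fetchone()[0]
    print(f"Store A total: {total}")  # -> Store A total: 3.5

    # Unstructured data: free text has no pre-defined model, so even a
    # simple question requires ad-hoc parsing or text analytics.
    email = "Hi team, Store A sold out of milk again; customers were unhappy."
    mentions_store_a = "Store A" in email  # crude keyword match
    print(f"Email mentions Store A: {mentions_store_a}")

As the sketch suggests, answering even simple questions over unstructured sources typically requires text-mining or natural-language techniques rather than a single query, which is why the large unexploited share of unstructured data is central to the big data discussion.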
According to a survey conducted by IDG Enterprise (2014) among more than 750 IT decision-makers in 2013, interest in big data continues to rise: about half of the respondents (50%) were implementing or planning to implement big data projects within their organizations.

BIG DATA DEFINITION
Although the term has gained huge popularity in recent years, it is still poorly defined and there is considerable ambiguity regarding its exact meaning (Hartmann et al. …