THE 10TH INTERNATIONAL CONFERENCE AND EXPO ON EMERGING TECHNOLOGIES FOR A SMARTER WORLD
DAY 1 • TRACK A
Big Data Architecture & Analytics: Session 1
Session Chair: Rong Zhao, CEWIT

Using Data Science for IT Improvement
Revathi Subramanian, CA Technologies (Revathi.subramanian@ca.com)

Some of the most common uses of data science are to: a) isolate and diagnose problems faster, b) resolve problems faster, and c) generate better business-centric insights. Today, IT managers spend too much time troubleshooting problems; users experience problems before IT operations detect them; IT management tools produce unusable output; current systems require extensive instrumentation and the commitment of expert resources; IT users have to spend too much time defining which metrics need to be monitored and how KPIs interact in normal operations; and IT management systems require extensive dashboard and alert-rule development.

Data science can help alleviate these problems at three levels. At the most basic level, data science can provide simple solutions that automate thresholding, generate real-time histograms that surface the most important problems, and use data-driven formulas to determine thresholds and baselines; all of these level-1 approaches can be thought of as univariate (see the first sketch following these abstracts). At a more sophisticated level, data science can provide model-based solutions that are multivariate, including neural networks for detecting anomalies; these models can adapt over time in an automated fashion with little human intervention, and can perform real-time analytics on very large volumes of data (see the second sketch below). At level 3, data science can even help IT managers forecast possible problems before they occur, using univariate as well as multivariate methods.

CA's Data Science team has considerable experience with numeric as well as textual data. For example, the team can apply content-based filtering to data generated by service desks to provide requesters with documents better matched to their problems, and collaborative filtering to surface the answers that other users with similar problems found most useful (see the third sketch below). This talk will focus on the exciting new ways in which data science could transform IT management.

A Modular Factory Planning Approach using the VPI Platform
Max Hoffmann, RWTH Aachen University (Max.Hoffmann@ima-zlw-ifu.rwth-aachen.de)
Ying Wang, Daniel Ewert, Daniel Schilberg, Sabina Jeschke, RWTH Aachen University

The continuously increasing complexity of products, as well as of mechanical and automated production processes, makes the analysis and optimization of these processes one of the most challenging tasks in modern factory and production planning. To increase quality and efficiency in the planning phase of manufacturing plants, model-based approaches need to be developed and implemented. Within this process, a large number of production parameters and multiple scenarios have to be taken into account to optimize the manufacturing process appropriately. In practice, factory planning, one of the main applications of production planning, is mostly performed by dividing the process into several steps. The challenges here are to integrate the different disciplines of this approach on a more abstract level and to systematize iterative processes. This paper presents a modular, holistic factory planning approach.
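As a concrete illustration of the level-1 (univariate) techniques described in the IT-improvement talk above, the following is a minimal sketch of a data-driven threshold: a rolling baseline plus a multiple of the rolling standard deviation. The function name, the window size, and the 3-sigma band are illustrative assumptions, not CA's implementation.

```python
import numpy as np

def dynamic_threshold(metric, window=60, k=3.0):
    """Flag points that exceed a rolling mean + k * rolling std baseline.

    metric : 1-D array of a single KPI (e.g., response time per minute).
    window : number of trailing samples used to fit the baseline.
    k      : width of the band in standard deviations (3-sigma here).
    """
    metric = np.asarray(metric, dtype=float)
    alerts = []
    for t in range(window, len(metric)):
        history = metric[t - window:t]          # trailing window only
        baseline = history.mean()
        band = k * history.std()
        if metric[t] > baseline + band:         # upper-threshold breach
            alerts.append((t, metric[t], baseline + band))
    return alerts

# Example: a steady metric with one injected spike.
rng = np.random.default_rng(0)
series = rng.normal(100.0, 5.0, 500)
series[400] = 160.0                              # simulated incident
print(dynamic_threshold(series))                 # flags index 400
```

No per-metric rules or dashboards are authored here: the threshold is recomputed from the data itself at every step, which is the point of the "automated thresholding" the abstract describes.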
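The same talk describes level-2, model-based multivariate detection; the abstract mentions neural networks. As a simpler stand-in for illustration only, this sketch scores joint anomalies across several KPIs with the Mahalanobis distance, which catches points that look normal per metric but violate the learned relationship between metrics.

```python
import numpy as np

def fit_mahalanobis(X):
    """Fit mean and inverse covariance on normal-operation data (rows = samples)."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    inv_cov = np.linalg.pinv(cov)               # pseudo-inverse for stability
    return mu, inv_cov

def anomaly_scores(X, mu, inv_cov):
    """Mahalanobis distance of each row; large values = jointly unusual KPIs."""
    diff = X - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, inv_cov, diff))

# Example: CPU load and latency move together under normal operation ...
rng = np.random.default_rng(1)
cpu = rng.normal(50, 10, 1000)
latency = 2.0 * cpu + rng.normal(0, 5, 1000)
mu, inv_cov = fit_mahalanobis(np.column_stack([cpu, latency]))

# ... so a probe that is normal per-metric but breaks the joint pattern scores high.
probe = np.array([[50.0, 100.0],                 # consistent with training data
                  [50.0, 200.0]])                # violates the CPU/latency relation
print(anomaly_scores(probe, mu, inv_cov))
```

Refitting the mean and covariance on a schedule gives the adapt-over-time behavior the abstract attributes to its models, without manual rule maintenance.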
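Finally, for the service-desk example, here is a minimal sketch of content-based filtering: rank knowledge-base documents against an incoming ticket in a shared TF-IDF space. It assumes scikit-learn is available; the articles and ticket text are made-up placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical knowledge-base articles and an incoming service-desk ticket.
articles = [
    "How to reset your VPN password",
    "Troubleshooting slow database queries",
    "Configuring email on mobile devices",
]
ticket = "my vpn login keeps rejecting my password"

# Represent everything in one TF-IDF space, then rank articles by cosine similarity.
vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(articles + [ticket])
scores = cosine_similarity(doc_matrix[-1], doc_matrix[:-1]).ravel()

for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.2f}  {articles[idx]}")  # the VPN article ranks first
```

Collaborative filtering, the other technique the talk mentions, would instead rank answers by which documents resolved similar tickets for other users; that requires a user-item interaction matrix rather than document text.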
How Cloud Computing and Big Data Are Changing the Medicines You Use
Brian Spielman, Medidata Solutions (spielman@mdsol.com)

The pharmaceutical industry finds itself surrounded by pressures. An aging population is demanding more treatments for lifestyle as well as life-critical conditions. New, targeted treatment and diagnostic pathways are being uncovered. Stringent regulations and public demands for safety are growing. And, of course, pricing pressures continue to force drug companies to reexamine the ways they bring their products to market. A substantial part of that process, the testing of drugs through clinical trials, accounts for over $80 billion yearly yet operates much the way it has since the 1950s, with phase-gate trial processes, departmental silos, and patchworks of legacy computer systems. Slower to adopt enabling technology than, say, the financial and retail sectors, pharma is only now opening up to the potential for changes to its process. While technology to enable specific activities in the clinical trial process has been used for decades, newer solutions based on cloud computing and big data are being pulled, and pushed, into these companies, leading to a recalibration of their scientific and business models. The technology itself is significantly contributing to its own uptake: the newer SaaS platforms are giving the different members of the research team real efficiencies and performance enhancements. This in turn is opening up measurable process efficiencies and enabling clinical trial redesigns that emphasize quicker go/no-go decisions.

Big Data Analytics for Research Libraries
Andrew White, Stony Brook University (andrew.white@stonybrook.edu)

Research libraries invest millions of dollars in information sources that are no longer physical in nature but are still needed to support the missions of their host company or institution. With the shift to digital publishing and networked access to content, it has become more important for libraries to assess the value of virtual collections hosted in the commercial publishing environment. Additional information about the cross-discipline relevancy of digital content is also important when weighing budgetary decisions that could negatively affect an institution as access to library information is discontinued. This presentation will show how libraries can use analytics to better understand the dependencies and relationships between library collections and the institutions they support. It will demonstrate how these analytics are captured and how data mining can reveal demographic use data for library collections, assist in improving institutional collection development, and create accountability and transparency for expenditures supporting research, business, and education (a minimal sketch of this kind of usage analysis follows the abstract).
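Below is a minimal sketch of the usage analysis the library talk describes: aggregating an e-resource access log to see which departments depend on each licensed collection. The column names, collection titles, and sample records are hypothetical, not Stony Brook's data.

```python
import pandas as pd

# Hypothetical e-resource access log: who used which licensed collection.
log = pd.DataFrame({
    "collection": ["IEEE Xplore", "IEEE Xplore", "JSTOR", "JSTOR", "JSTOR"],
    "department": ["Engineering", "Physics", "History", "History", "Economics"],
    "downloads":  [120, 45, 30, 22, 18],
})

# Demographic use: which departments depend on each collection, and how heavily.
usage = (log.groupby(["collection", "department"])["downloads"]
            .sum()
            .sort_values(ascending=False))
print(usage)

# Cross-discipline relevancy: how many distinct departments touch each collection.
breadth = log.groupby("collection")["department"].nunique()
print(breadth)   # informs renewal decisions when a cancellation would hit many units
```

The same aggregation, joined with subscription costs, yields the accountability and transparency figures the abstract mentions, e.g., cost per download by supported department.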