Christian Reimsbach-Kounatze | Data-driven Innovation for Growth and Well-being

OECD, the Organisation for Economic Co-operation and Development
Wednesday, October 5, 14.00

Summary
Data have become a key infrastructure for 21st-century knowledge economies. Data are not the “new oil”, as is still too often proclaimed. They are rather an infrastructure and a capital good that can be used across society for a theoretically unlimited range of productive purposes without being depleted. Data provide economies with significant growth opportunities through spillover effects that support data-driven innovation (DDI). And as with any infrastructure, there can be significant (social) opportunity costs in limiting access: open (closed) access enables (restricts) user opportunities and degrees of freedom in the downstream production of private, public and social goods and services. These spillover effects make calculating the overall economic benefits of data very challenging. In addition, optimal pricing can be hard to determine, also because data have no intrinsic value: their value depends not only on data-specific factors such as accuracy and timeliness (data quality), but also on the context of their use and the availability of complementary resources.

Christian Reimsbach-Kounatze is an Information Economist / Policy Analyst at the OECD Directorate for Science, Technology and Innovation (STI). Christian has been working at STI on issues related to the digital economy since 2008. This includes in particular work on the economic performance of the world’s largest ICT firms, the impact of ICTs on skills and employment, and more recent work on the economics of “big data”, where he also coordinates the OECD project on “Data-Driven Innovation for Growth and Well-Being”. He currently works on the role of demand-side policies in stimulating digital innovation.

Prof. Dr. Volker Markl | Big Data Management and Scalable Data Science

Berlin Big Data Center
Thursday, October 6, 8.30

Summary
The shortage of qualified data scientists is effectively preventing Big Data from fully realizing its potential to deliver insight and provide value for scientists, business analysts, and society as a whole. Data science draws on a broad range of advanced concepts from mathematics, statistics, and computer science, in addition to requiring knowledge of an application domain. Teaching these diverse skills alone will not enable us to exploit, on a broad scale, the power of predictive and prescriptive models for huge, heterogeneous, and high-velocity data. Instead, we will have to simplify the tasks a data scientist needs to perform, bringing technology to the rescue: for example, by developing novel ways for the specification, automatic parallelization, optimization, and efficient execution of deep data analysis workflows. This will require us to integrate concepts from data management systems, scalable processing, and machine learning in order to build widely usable and scalable data analysis systems. I will present some of our research results towards this goal, including the Apache Flink open-source big data analytics system, concepts for the scalable processing of iterative data analysis programs, and ideas on enabling optimistic fault tolerance.
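As a minimal sketch of what such a declaratively specified analysis workflow can look like, the classic word-count program in Apache Flink’s batch (DataSet) API is shown below in Java. The class name and the toy input are assumptions used only for illustration; the point is that the program states just the dataflow, while the Flink engine takes care of parallelization, optimization and execution.

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.util.Collector;

public class WordCount {
    public static void main(String[] args) throws Exception {
        // The execution environment hides where and how the job actually runs
        // (locally or on a cluster).
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Toy input data, purely illustrative.
        DataSet<String> text = env.fromElements("big data analytics", "scalable data analysis");

        // Declarative dataflow: split lines into words, group by word, sum the counts.
        // The optimizer chooses the physical execution plan.
        DataSet<Tuple2<String, Integer>> counts = text
            .flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
                @Override
                public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
                    for (String word : line.toLowerCase().split("\\s+")) {
                        out.collect(new Tuple2<>(word, 1));
                    }
                }
            })
            .groupBy(0)   // group by the word field
            .sum(1);      // sum the per-word counts

        counts.print();   // triggers execution and prints the result
    }
}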

Volker Markl is a Full Professor and Chair of the Database Systems and Information Management (DIMA) group at the Technische Universität Berlin (TU Berlin) and also holds a position as an adjunct full professor at the University of Toronto. He is director of the research group “Intelligent Analysis of Mass Data” at DFKI, the German Research Center for Artificial Intelligence, and director of the Berlin Big Data Center, a collaborative research center bringing together research groups in the areas of distributed systems, scalable data processing, text mining, networking, and machine learning, with applications in areas such as healthcare, logistics, Industrie 4.0, and information marketplaces.

Martin Neuenhahn | “Always On” – Essential Capability for Supply Chain Resilience

Software AG
Friday, October 7, 8.30

Summary
Data is becoming accessible everywhere at rapidly decreasing cost. The sheer number of stored data points allows industries of all kinds to deepen and widen their understanding of their businesses and opens the door to new applications, either by combining these insights with existing products and processes or by building a new business on the data alone: a truly new digital business.
The real value will not come from stored data, i.e. the past. It will come from the ability to influence current activities or even to predict the future. To realize this, you need real-time transparency across your complete end-to-end data value chain, combined with an understanding of the predicted future behavior of your business.
Based on this transparency, automated, proactive and self-optimizing actions can be triggered and executed. To unleash the full power of an “Always On” supply chain, it has to be digitally consistent and cover the whole end-to-end process, including warehouses and factories, suppliers and partners.
As the world keeps changing ever faster, business models and the supporting supply chains will have to adapt continuously. In this respect, the “Always On” supply chain is flexible and adaptive. This leads to competitive advantages, as new business models or sales channels can be realized as agile applications.
This talk will cover how to turn your supply chain into a big data value chain operating in an “Always On” mode, with real-world examples of how companies are, or are becoming, “Always On”.

Martin Neuenhahn is a Business Consultant and Business Development Manager for the production industries sector at Software AG, Germany. As an expert in Industrie 4.0 (smart production) and the Internet of Things, he supports Software AG customers in coping with the challenges of the digitization trend and in being among the winners in exciting times. He studied electrical engineering and business administration at RWTH Aachen, Germany. Afterwards, he worked for 3M, developing products that bridge the gap between the analog and digital worlds.

Eric van der Vliet | Solution Based Estimation

CGI Estimation Center
Friday, October 7, 15.30

Summary
Estimating complex IT systems is a challenge due to the large number of components, the different types of technologies, and the various integrations that are required. Estimation and metrics methods often assume a certain level of homogeneity across the system whose cost is being estimated, which is usually not the case. Such homogeneity is required to be able to extrapolate and re-use metrics from previous, comparable versions of the system. Within CGI, the estimation of complex solutions consisting of multiple, substantially different software, infrastructure and often organizational elements was identified as a risk. Existing estimation methods did not support this complexity, and there was a need for an improved method to manage the step from solution to costing to project control. This method is called Solution Based Estimation and is based on the fact that solutions are heterogeneous. Architects, cost engineers and project managers must understand this heterogeneous solution to be able to create a reliable estimate of effort, cost and duration. This presentation explains the concepts of Solution Based Estimation and shows an example based on a package implementation. Solution Based Estimation does not replace other estimation methods but makes it possible to integrate various estimation methods and the disciplines involved, resulting in improved traceability and reliability of complex IT solution estimates.

Eric van der Vliet has been working in IT for over 25 years, with experience in process improvement, solution architecture, technical assurance and software cost engineering. At a previous company he was responsible for setting up a global Estimation and Metrics Desk for outsourcing services. Currently he is Director of CGI’s global Estimation Centre, which is responsible for performing estimation verifications and managing the metrics of internal engagements. As program manager of CGI’s global estimation evolution program, he is responsible for improving primary estimation capabilities within CGI. In this role he created the Solution Based Estimation approach to improve traceability from solution to estimates and project control, and to achieve better alignment between architects, cost engineers and project managers. As a board member of the Dutch metrics association Nesma, Eric is responsible for the Nesma 2020 program, whose objective is to broaden the scope from sizing to cost engineering. Eric represents Nesma on the program steering committee of a consortium between ICEAA, Nesma and IFPUG that is working on the development of an international certification for Software Cost Engineers.