The ACM Journal of Data and Information Quality (ACM JDIQ) will publish
a Special Issue on Improving the Veracity and Value of Big Data. This
is a key focus within VADA, and Norman Paton from the project is one of
the Guest Editors for the Special Issue. Further details are available at: http://jdiq.acm.org/CFP-JDIQ-SI-VVBD.pdf
Prof. Leonid Libkin has been awarded an EPSRC Established Career Fellowship. The grant's title is "MAGIC: MAnaGing InComplete Data – New Foundations", and its total amount is £1.14M over 5 years, starting 1 August 2016.
The main goal of this research programme is to deliver a new understanding of uncertain and incomplete information in data processing tasks, and by doing so to provide new ways of extracting knowledge from such data. It will reconcile correctness guarantees with an efficient algorithmic toolkit that scales to large data sets, and put an end to the perceived impossibility of achieving correctness and efficiency simultaneously for large classes of queries over incomplete data.
Data wrangling is the process by which data is identified, extracted, integrated and cleaned for analysis. The New York Times reports that “Data scientists, according to interviews and expert estimates, spend from 50 percent to 80 percent of their time mired in this more mundane labor of collecting and preparing unruly digital data”.
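The stages named above (extraction, integration and cleaning) can be sketched in a few lines of code. The sketch below is purely illustrative, assuming two hypothetical CSV sources with inconsistent column names and missing values; none of the data or helper names come from the VADA project.

```python
# A minimal, hypothetical sketch of the wrangling steps named above
# (extract, integrate, clean); the sources and helpers are illustrative.

import csv
import io

# Two "sources" with inconsistent schemas and quality, as raw CSV text.
SOURCE_A = "name,price\nWidget,9.99\nGadget,\n"
SOURCE_B = "item,cost\ngizmo,12.50\n"

def extract(raw, name_col, value_col):
    """Extract records from raw CSV, mapping columns onto a shared schema."""
    rows = csv.DictReader(io.StringIO(raw))
    return [{"name": r[name_col], "price": r[value_col]} for r in rows]

def integrate(*record_sets):
    """Integrate records from several sources into one collection."""
    merged = []
    for records in record_sets:
        merged.extend(records)
    return merged

def clean(records):
    """Clean: drop incomplete records, normalise names, parse prices."""
    cleaned = []
    for r in records:
        if not r["price"]:  # discard records with missing values
            continue
        cleaned.append({"name": r["name"].strip().title(),
                        "price": float(r["price"])})
    return cleaned

data = clean(integrate(extract(SOURCE_A, "name", "price"),
                       extract(SOURCE_B, "item", "cost")))
# data == [{"name": "Widget", "price": 9.99}, {"name": "Gizmo", "price": 12.5}]
```

Even in this toy form, the schema mapping, the merge policy and the rule for handling missing values are all manual decisions; automating such choices at scale is precisely what makes wrangling costly.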
The VADA project exists to put data wrangling on a firmer footing, in which automation, more systematic use of the available evidence, and carefully targeted user input lead to more efficient data wrangling. One of the goals of the project is to encourage a larger community of researchers and developers to work on techniques and tools for data wrangling. With this in mind, a paper on “Data Wrangling for Big Data: Challenges and Opportunities” has been written by members of the VADA team, and published in the Vision Track of the 19th International Conference on Extending Database Technology, March 15-18, 2016, in Bordeaux, France. This paper makes the case that a concerted effort to address specific challenges in data wrangling can be expected to yield substantial rewards.
Data is everywhere, generated by increasing numbers of applications, devices and users, with few or no guarantees on format, semantics, and quality. The economic potential of data-driven innovation is enormous, estimated by the Centre for Economics and Business Research to reach as much as £40B in 2017.
To realise this potential, and to provide meaningful data analyses, data scientists must first spend a significant portion of their time (estimated at 50% to 80%) on "data wrangling" – the process of collecting, reorganising, and cleaning data. This heavy toll is due to what are referred to as the four Vs of big data: Volume – the scale of the data; Velocity – the speed of change; Variety – the different forms of data; and Veracity – the uncertainty of the data.
There is an urgent need to provide data scientists with a new generation of tools that will unlock the potential of data assets and significantly reduce the data wrangling effort. As many traditional tools are no longer applicable in the four Vs environment, a radical paradigm shift is required. The VADA Programme Grant aims to add value to data by:
- carrying out data management tasks in an environment that takes full account of data and user contexts, and
- integrating and automating key data management tasks in a way not yet attempted, but desperately needed by many innovative companies in today’s data-driven economy.
The VADA research programme will define principles and solutions for Value Added Data Systems, which support users in discovering, extracting, integrating, accessing and interpreting the data of relevance to their questions. In so doing, it uses the context of the user, e.g., requirements in terms of the trade-off between completeness and correctness, and the data context, e.g., its cost, provenance and quality. The user context characterises not only what data is relevant, but also the properties it must exhibit to be fit for purpose.