Extracting meaning from data

Extracting meaning from data is the central business of the information era. ML Market is a European consortium of leading researchers spanning information processing, data analysis, statistics, and machine learning.

ML Market groups are drawn from world-leading research groups within the Pascal European network that actively engage in finding business solutions to challenging real-world problems. ML Market exists to promote the academic and industrial expertise of its researchers and provides a platform for engaging and brokering industrial contacts.

Case Studies

Bonaparte Disaster Victim Identification System

Society is increasingly aware of the possibility of a mass disaster. Recent examples are the WTC attacks, the tsunamis, and various airplane crashes. In such an event, the recovery and identification of the remains of the victims is of great importance, for both humanitarian and legal reasons. Disaster victim identification (DVI), i.e. the identification of victims of a mass disaster, is greatly facilitated by the advent of modern DNA technology. In forensic laboratories, DNA profiles can be recorded from small samples of body remains which would otherwise be unidentifiable.
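To give a flavour of the statistics involved, the sketch below computes a likelihood ratio for a direct DNA profile match under the product rule. The allele frequencies and loci are hypothetical, and this is only an illustration: the Bonaparte system itself reasons over family pedigrees with Bayesian networks, which is considerably richer than a direct match.

```python
# Illustrative only: likelihood ratio for a direct DNA profile match.
# Allele frequencies below are hypothetical; Bonaparte's actual method
# uses Bayesian networks over pedigrees, not this simple product rule.

def genotype_frequency(p, q):
    """Hardy-Weinberg frequency of a genotype with allele frequencies p, q."""
    return p * p if p == q else 2 * p * q

def match_likelihood_ratio(profile, allele_freqs):
    """LR = P(match | same person) / P(match | unrelated person).
    Under the product rule the denominator is the product of genotype
    frequencies across independent loci; the numerator is 1."""
    lr = 1.0
    for locus, (a1, a2) in profile.items():
        p = allele_freqs[locus][a1]
        q = allele_freqs[locus][a2]
        lr /= genotype_frequency(p, q)
    return lr

# Hypothetical two-locus profile and population frequencies
freqs = {"D3S1358": {"15": 0.25, "16": 0.22}, "vWA": {"17": 0.27, "18": 0.20}}
profile = {"D3S1358": ("15", "16"), "vWA": ("17", "17")}
print(match_likelihood_ratio(profile, freqs))  # ~124.7
```

Even two loci already yield a likelihood ratio above 100; real casework uses many more loci, giving ratios large enough to support identification.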

Winestein, the computer with taste for wine

A common problem when you organize a dinner, or when you are handed the wine list in a restaurant: which wine goes best with your dish?

Winewinewine.com is a web portal for wine. One of its distinguishing features is Winestein, the online sommelier. You can enter any dish of your choice by specifying its ingredients and cooking method, and Winestein then recommends matching wines.
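The ingredients-plus-method input could in principle drive anything from hand-written rules to a learned model. As a toy illustration only (Winestein's actual reasoning is not described here, and all rules and wine styles below are made up), a rule-based matcher might look like this:

```python
# Toy illustration: rule-based dish-to-wine matching.
# All rules, dishes, and wine styles are hypothetical examples,
# not Winestein's actual knowledge base.

PAIRING_RULES = [
    # (predicate over the dish, recommended wine style)
    (lambda d: "fish" in d["ingredients"] and d["method"] == "grilled", "dry white"),
    (lambda d: "beef" in d["ingredients"], "full-bodied red"),
    (lambda d: d["method"] == "deep-fried", "sparkling"),
]

def recommend(dish, rules=PAIRING_RULES, default="rosé"):
    """Return the wine style of the first rule that matches the dish."""
    for predicate, wine in rules:
        if predicate(dish):
            return wine
    return default

dish = {"ingredients": {"beef", "onion"}, "method": "braised"}
print(recommend(dish))  # full-bodied red
```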

What are you looking at?

You’re waiting at the station for your train and you glance at the electronic poster next to you. It notices that you’re looking at it, and from your gaze it works out what you would most like to see. It’s as though it’s reading your mind – but really it’s reading your eyes.

Topics

Automatic Speech Recognition and Understanding

Huge amounts of audiovisual media are generated on a daily basis: parliamentary sessions, private meetings, TV and radio shows, public speeches, medical recordings, and many more. The sheer volume of this information makes it impossible to manage efficiently by human effort alone. Automatic Speech Recognition and Understanding (ASRU) comes in handy for automatically managing and indexing such large amounts of audiovisual content.
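Once ASRU has turned audio into text, indexing can proceed with standard text retrieval techniques. A minimal sketch, assuming hypothetical transcripts standing in for real ASR output, is an inverted index mapping words to the recordings that mention them:

```python
# Sketch: indexing ASR transcripts with an inverted index.
# The transcripts here are hypothetical stand-ins for real ASR output.
from collections import defaultdict

def build_index(transcripts):
    """Map each word to the set of recording ids whose transcript contains it."""
    index = defaultdict(set)
    for rec_id, text in transcripts.items():
        for word in text.lower().split():
            index[word].add(rec_id)
    return index

transcripts = {
    "session_042": "the budget amendment passed after a long debate",
    "show_007": "tonight we debate the future of public radio",
}
index = build_index(transcripts)
print(sorted(index["debate"]))  # both recordings mention "debate"
```

Real systems add timestamps, confidence scores, and ranking, but the principle is the same: speech becomes text, and text becomes searchable.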

Multimodal Interaction and Adaptive Learning

Traditional Pattern Recognition (PR) and Machine Learning (ML) have generally focused on full automation; that is, on developing technologies ultimately aimed at fully replacing human beings in tasks that require complex perceptive and/or cognitive skills. However, full automation often proves elusive or unnatural in applications where technology is expected to assist rather than replace the human agents. This calls for a paradigm shift that places PR/ML within the framework of human interaction.

Multimodal Interaction and Adaptive Learning deals with the fundamental work needed to address the research challenges and opportunities entailed by this paradigm shift. These include: interaction analysis and modelling, multimodal processing and fusion, interactive performance estimation and measurement, and several emerging forms of machine learning that look especially promising in the interactive framework (online, adaptive, active, semi-supervised, limited feedback, reinforcement, etc.).
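Of the learning forms listed above, active learning is perhaps the simplest to sketch: the learner asks the human to label the example it is least certain about. The model and data below are toy 1-D illustrations, not part of any system described here:

```python
# Minimal uncertainty-sampling sketch (active learning, one of the
# interactive learning forms above). Model and data are toy examples.
import math

def predict_proba(x, w, b):
    """Logistic model P(y=1 | x) for a 1-D input."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

def most_uncertain(pool, w, b):
    """Return the unlabeled example whose prediction is closest to 0.5,
    i.e. the one nearest the current decision boundary."""
    return min(pool, key=lambda x: abs(predict_proba(x, w, b) - 0.5))

pool = [-3.0, -1.0, 0.2, 2.5]
query = most_uncertain(pool, w=1.0, b=0.0)
print(query)  # 0.2, the point nearest the boundary
```

The human labels the queried example, the model is retrained, and the cycle repeats; this is how interaction reduces the labeling effort compared with labeling the whole pool up front.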

Text mining

Text mining, also referred to as text data mining and roughly equivalent to text analytics, is the process of deriving high-quality information from text. High-quality information is typically derived by devising patterns and trends through means such as statistical pattern learning; 'high quality' here usually refers to some combination of relevance, novelty, and interestingness.

Text mining usually involves structuring the input text (usually parsing, along with the addition of some derived linguistic features, the removal of others, and subsequent insertion into a database), deriving patterns within the structured data, and finally evaluating and interpreting the output. Typical text mining tasks include text categorization, text clustering, concept/entity extraction, production of granular taxonomies, sentiment analysis, document summarization, and entity relation modeling (i.e., learning relations between named entities).
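Text categorization, the first task in that list, illustrates the "structure the text, then derive patterns" pipeline well. A minimal sketch, using a bag-of-words multinomial Naive Bayes classifier with Laplace smoothing on a hypothetical toy corpus:

```python
# Minimal bag-of-words text categorizer: multinomial Naive Bayes with
# Laplace smoothing. The training corpus is a hypothetical toy example.
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (text, label). Returns priors, per-class word counts,
    per-class token totals, and the vocabulary."""
    priors, counts, totals = Counter(), defaultdict(Counter), Counter()
    for text, label in docs:
        priors[label] += 1
        words = text.lower().split()
        counts[label].update(words)
        totals[label] += len(words)
    vocab = {w for c in counts.values() for w in c}
    return priors, counts, totals, vocab

def classify(text, model):
    priors, counts, totals, vocab = model
    n_docs = sum(priors.values())
    best, best_score = None, float("-inf")
    for label in priors:
        score = math.log(priors[label] / n_docs)
        for w in text.lower().split():
            # Laplace-smoothed log likelihood of word w given the class
            score += math.log((counts[label][w] + 1) / (totals[label] + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

docs = [
    ("stocks fell sharply on weak earnings", "finance"),
    ("the central bank raised interest rates", "finance"),
    ("the team won the championship final", "sport"),
    ("a stunning goal decided the match", "sport"),
]
model = train(docs)
print(classify("interest rates and earnings", model))  # finance
```

Structuring here is just lowercasing and tokenization; real pipelines add parsing and linguistic features, but the pattern-derivation step, counting and weighing word evidence per class, is the same in spirit.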