Extracting meaning from data


Extracting meaning from data is the central business of the information era. ML Market is a European consortium of leading researchers whose expertise spans information processing, data analysis, statistics and machine learning.

ML Market brings together world-leading research groups from the Pascal European network that actively engage in finding business solutions to challenging real-world problems. ML Market exists to promote the academic and industrial expertise of its researchers and provides a platform for engaging and brokering industrial contacts.

Case Studies

Winestein, the computer with a taste for wine

A common problem when you organise a dinner, or when you are handed the wine list in a restaurant: which wine goes best with your dish?

Winewinewine.com is a web portal for wine. One of its distinguishing features is Winestein, the online sommelier: you describe any dish by entering its ingredients and cooking method, and Winestein advises matching wines.
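The text does not describe Winestein's actual matching logic, so the following is a purely hypothetical sketch of how ingredients and a cooking method could be mapped to a wine style with simple pairing rules.

```python
# Hypothetical rule-based pairing sketch; the rules, ingredients and wine
# styles are illustrative only and do not reflect Winestein's real system.
PAIRING_RULES = [
    ({"salmon", "cod", "shellfish"}, "grilled", "dry white (e.g. Chablis)"),
    ({"beef", "lamb"},               "roasted", "full-bodied red (e.g. Syrah)"),
    ({"chicken", "pork"},            "braised", "light red or rose"),
]

def advise(ingredients: set[str], method: str) -> str:
    """Return the first wine style whose rule matches the dish."""
    for rule_ingredients, rule_method, wine in PAIRING_RULES:
        if ingredients & rule_ingredients and method == rule_method:
            return wine
    return "versatile sparkling wine"  # fallback when no rule matches

print(advise({"salmon", "lemon"}, "grilled"))  # -> dry white (e.g. Chablis)
```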

The Desktop Doctor

Hundreds of years of medical experience. Infinite patience and the ability to take every symptom into account. Precise and logical, up to date, and never short on ideas. All just casually sitting on your doctor's desk. It may not have much of a bedside manner, but then its job is not to meet patients.

What are you looking at?

You’re waiting at the station for your train and you glance at the electronic poster next to you. It notices that you’re looking at it, and from your gaze it works out what you would most like to see. It’s as though it’s reading your mind – but really it’s reading your eyes.

Topics

Text mining

Text mining, sometimes also referred to as text data mining and roughly equivalent to text analytics, is the process of deriving high-quality information from text. High-quality information is typically derived through the devising of patterns and trends by means such as statistical pattern learning. Text mining usually involves structuring the input text (usually parsing, along with the addition of some derived linguistic features and the removal of others, and subsequent insertion into a database), deriving patterns within the structured data, and finally evaluating and interpreting the output. 'High quality' in text mining usually refers to some combination of relevance, novelty, and interestingness. Typical text mining tasks include text categorization, text clustering, concept/entity extraction, production of granular taxonomies, sentiment analysis, document summarization, and entity relation modeling (i.e., learning relations between named entities).
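As a concrete illustration of one typical task, text categorization, here is a minimal sketch using scikit-learn (an assumed toolkit, not mentioned in the text): documents are structured into TF-IDF feature vectors and a linear classifier learns a pattern that separates the categories.

```python
# Minimal text categorization sketch. The toy documents and labels are
# invented for illustration; a real pipeline would use a labelled corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "the wine pairs well with fish",
    "stocks fell sharply today",
    "a fruity red suits roasted lamb",
    "the market rallied after the report",
]
labels = ["food", "finance", "food", "finance"]

# Structuring step (TF-IDF vectors) followed by pattern derivation (classifier).
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(docs, labels)

print(model.predict(["which wine goes with grilled salmon?"]))  # expected: ['food']
```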

Automatic Speech Recognition and Understanding

Huge amounts of audiovisual media are generated on a daily basis: parliamentary sessions, private meetings, TV and radio shows, public speeches, medical recordings, and many more. The sheer volume of this information makes it impossible to manage efficiently through human effort alone. Automatic Speech Recognition and Understanding (ASRU) makes it possible to manage and index such large amounts of audiovisual content automatically.
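A minimal sketch of that idea, assuming the SpeechRecognition Python package and some hypothetical recordings: each file is transcribed and the words are placed in a simple inverted index so recordings can later be searched by keyword.

```python
# Hedged sketch: transcribe recordings, then index words -> recordings.
# The package choice, backend and file names are assumptions for illustration.
import speech_recognition as sr
from collections import defaultdict

recognizer = sr.Recognizer()
index = defaultdict(set)  # word -> set of recordings containing it

for path in ["session_01.wav", "interview_02.wav"]:  # hypothetical recordings
    with sr.AudioFile(path) as source:
        audio = recognizer.record(source)
    try:
        transcript = recognizer.recognize_google(audio)  # any ASR backend would do
    except sr.UnknownValueError:
        continue  # skip recordings the recognizer cannot decode
    for word in transcript.lower().split():
        index[word].add(path)

print(index.get("budget", set()))  # recordings in which "budget" was spoken
```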

Interactive Natural Language Processing

The current state of the art in different areas of natural language processing (NLP) is still very far from allowing fully automatic high-quality results (HQRs), so human intervention is required to correct the output of NLP engines. This applies in particular to fields such as machine translation and cross-language processing, text recognition, parsing, speech recognition, and information retrieval.

Interactive NLP aims to produce HQRs through tight collaboration between a human operator and an NLP system, following an interactive-predictive paradigm. Moreover, interactivity offers a unique context in which the feedback provided by the human can be used as new training data for adapting NLP systems to new environments.
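A toy sketch of the interactive-predictive loop, under the assumption of a canned completion table standing in for a real NLP engine: the system proposes a full output, the operator validates a prefix and corrects the next word, and the system re-predicts the remaining suffix; the corrections also accumulate as new training data.

```python
# Toy interactive-predictive loop; the completion table and the simulated
# operator correction are hypothetical stand-ins for a real engine and user.
COMPLETIONS = {
    "": "the cat sat on the chair",
    "the cat sat on the mat": "the cat sat on the mat",
}

def predict(validated_prefix: str) -> str:
    """Best full hypothesis consistent with the validated prefix (toy lookup)."""
    return COMPLETIONS.get(validated_prefix, validated_prefix)

validated = ""                          # nothing validated yet
print("proposal:", predict(validated))  # -> "the cat sat on the chair"

# The operator accepts "the cat sat on the" and corrects the next word to "mat".
validated = "the cat sat on the mat"
feedback = [validated]                  # corrections can later retrain the engine

print("revised :", predict(validated))  # -> "the cat sat on the mat"
```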