Extracting meaning from data

Extracting meaning from data is the central business of the information era. ML Market is a European consortium of leading researchers whose expertise spans a range of areas in information processing, data analysis, statistics, and machine learning.

ML Market's members are drawn from world-leading research groups within the Pascal European network that actively engage with industry to solve challenging real-world problems. ML Market exists to promote the academic and industrial expertise of its researchers and provides a platform to engage and broker industrial contacts.

Case Studies

Bonaparte Disaster Victim Identification System

Society is increasingly aware of the possibility of a mass disaster. Recent examples include the WTC attacks, tsunamis, and various airplane crashes. In such an event, the recovery and identification of the remains of the victims is of great importance, for both humanitarian and legal reasons. Disaster victim identification (DVI), i.e. the identification of victims of a mass disaster, is greatly facilitated by the advent of modern DNA technology. In forensic laboratories, DNA profiles can be recorded from small samples of body remains which may otherwise be unidentifiable.

The Desktop Doctor

Hundreds of years of medical experience. Infinite patience and the ability to take every symptom into account. Precise and logical, up-to-date, and never short on ideas. All just casually sitting on your doctor’s desk. It may not have much of a bedside manner, but then its job is not to meet patients.

What are you looking at?

You’re waiting at the station for your train and you glance at the electronic poster next to you. It notices that you’re looking at it, and from your gaze it works out what you would most like to see. It’s as though it’s reading your mind – but really it’s reading your eyes.

Topics

Automatic Speech Recognition and Understanding

Huge amounts of audiovisual media are generated on a daily basis: parliamentary sessions, private meetings, TV and radio shows, public speeches, medical recordings, and many more. The sheer volume of this material makes it impossible to manage efficiently through human effort alone. Automatic Speech Recognition and Understanding (ASRU) makes it possible to manage and index such large amounts of audiovisual content automatically.
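To illustrate the indexing side of this, the sketch below builds an inverted index over the text output of a speech recogniser, so recordings can be retrieved by the words spoken in them. It uses only the Python standard library; the recording ids and transcript texts are invented for the example, standing in for real ASR output.

```python
from collections import defaultdict

def build_index(transcripts):
    """Build an inverted index: word -> set of recording ids containing it."""
    index = defaultdict(set)
    for rec_id, text in transcripts.items():
        for word in text.lower().split():
            index[word.strip(".,!?")].add(rec_id)
    return index

# Hypothetical ASR output: recording id -> recognised transcript.
transcripts = {
    "session_01": "The committee approved the budget proposal.",
    "radio_17": "Tonight's show covers the new budget and local news.",
    "speech_04": "Thank you all for coming to this public meeting.",
}

index = build_index(transcripts)
print(sorted(index["budget"]))  # ['radio_17', 'session_01']
```

A real system would add timestamps, confidence scores, and language-model smoothing, but the retrieval principle is the same: once speech becomes text, standard text indexing applies.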

Handwritten Text Recognition

Many documents used every day are handwritten: postal addresses, bank cheques, medical prescriptions, large numbers of historical documents, much of the information gathered on forms, and so on. In many cases it would be useful to have these documents in digital rather than paper form, to enable new ways of indexing, consulting, and working with them.

Handwritten text recognition (HTR) can be defined as the ability of a computer to transform handwritten input, represented in its spatial form as graphical marks, into an equivalent symbolic representation such as ASCII text. This input typically comes from sources such as paper documents, photographs, electronic pens, and touch screens.

Text mining

Text mining, sometimes referred to as text data mining and roughly equivalent to text analytics, is the process of deriving high-quality information from text. High-quality information is typically derived by devising patterns and trends through means such as statistical pattern learning; 'high quality' here usually refers to some combination of relevance, novelty, and interestingness.

Text mining usually involves structuring the input text (usually parsing, along with the addition of some derived linguistic features and the removal of others, and subsequent insertion into a database), deriving patterns within the structured data, and finally evaluating and interpreting the output. Typical text mining tasks include text categorization, text clustering, concept/entity extraction, production of granular taxonomies, sentiment analysis, document summarization, and entity relation modeling (i.e., learning relations between named entities).
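The structure-then-derive pipeline described above can be sketched in miniature: the code below "structures" raw text into a bag of content words, then "derives a pattern" by extracting the most frequent terms. The stopword list and the sample document are invented for the example; real systems replace both steps with proper parsing and statistical learning.

```python
import re
from collections import Counter

# Hypothetical, deliberately tiny stopword list for the example.
STOPWORDS = {"the", "a", "of", "and", "to", "in", "is", "for"}

def structure(text):
    """Structuring step: parse raw text into a bag of content words."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS)

def top_terms(text, n=2):
    """Pattern step: a crude 'trend' - the most frequent content words."""
    return [w for w, _ in structure(text).most_common(n)]

doc = ("The court ruled on the patent dispute. The patent covered "
       "a method for compressing data, and the court found the patent valid.")
print(top_terms(doc))  # ['patent', 'court']
```

Even this crude frequency pattern hints at what the document is about; tasks such as categorization or summarization build richer models on top of the same structured representation.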