...

Luminoso’s approach follows a different path, known as Common Sense AI. The Common Sense AI approach, used by Luminoso’s QuickLearn technology, requires neither massive amounts of data nor manual updates by experts.

QuickLearn technology makes use of word embeddings, in which each word is represented as a vector, and vectors pointing in similar directions represent words with similar meanings. Unlike vectors in 3-D space, each word embedding may have hundreds of dimensions to capture a word’s meaning and nuances. Compared to other machine learning methods, manipulating these vectors mathematically is far less computationally intensive. QuickLearn creates a semantic space, a table of word embeddings representing its understanding of words.

Luminoso then uses a deep learning technique known as transfer learning. Transfer learning applies what an existing machine learning system has already learned to a new task with new data. Effective transfer learning lets you start solving a problem with much less data, because the system doesn’t have to learn everything from scratch. In this case, the problem is understanding your domain text -- for example, clinical trial questionnaires, medical inquiries, or salesforce.com CRM notes from MSLs. Luminoso’s QuickLearn provides transfer learning that learns your domain-specific terminology from a database of general knowledge known as a background space.
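The idea that vectors pointing in similar directions represent similar meanings can be sketched with cosine similarity. The tiny 4-dimensional embedding table below is invented for illustration (real semantic spaces have hundreds of dimensions); it is not Luminoso’s actual data or API.

```python
import math

# Toy "semantic space": each word maps to a small embedding vector.
# Real systems use hundreds of dimensions; four suffice to illustrate.
EMBEDDINGS = {
    "doctor":    [0.90, 0.10, 0.30, 0.00],
    "physician": [0.85, 0.15, 0.35, 0.05],
    "bicycle":   [0.00, 0.90, 0.00, 0.80],
}

def cosine_similarity(u, v):
    """Words whose vectors point in similar directions score near 1.0."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

sim_related = cosine_similarity(EMBEDDINGS["doctor"], EMBEDDINGS["physician"])
sim_unrelated = cosine_similarity(EMBEDDINGS["doctor"], EMBEDDINGS["bicycle"])
```

Here `sim_related` comes out close to 1.0 while `sim_unrelated` is near 0, mirroring how a semantic space groups related concepts together.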

A background space is a space of word embeddings representing what words mean in general, rather than in a specific domain. The background space is meant to capture “common sense knowledge” and the inherent definitions of words. Luminoso uses a background space called ConceptNet, which represents common sense knowledge of how the world works: over 35 million relationships between concepts, mathematically represented in a general-domain model. When presented with domain-specific text, such as medical terms, the system can apply its general knowledge to learn specific terms immediately from context, without manual intervention, like a bicyclist learning to ride a recumbent bicycle.
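One simple way to picture learning a new term from context against a background space is to place the unknown word at the average of the embeddings of the known words around it. This is a hedged, illustrative approximation with made-up vectors, not Luminoso’s or ConceptNet’s actual algorithm.

```python
# Invented 3-D background space for illustration only.
BACKGROUND = {
    "bicycle": [0.1, 0.9, 0.2],
    "ride":    [0.2, 0.8, 0.1],
    "seat":    [0.0, 0.7, 0.3],
    "finance": [0.9, 0.0, 0.1],
}

def embed_from_context(context_words, space):
    """Place an unknown word at the centroid of its known context words."""
    known = [space[w] for w in context_words if w in space]
    dim = len(next(iter(space.values())))
    return [sum(vec[i] for vec in known) / len(known) for i in range(dim)]

# "recumbent" is unknown, but it appears alongside known cycling words,
# so its inferred vector lands near "bicycle", far from "finance".
recumbent = embed_from_context(["ride", "bicycle", "seat"], BACKGROUND)
```

The inferred vector for "recumbent" ends up much closer to "bicycle" than to "finance", echoing the recumbent-bicycle analogy: general knowledge supplies the neighborhood, and context places the new term within it.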

...