InterACT Lisbon - Conversational Machine

The final session of the conference is one of the highlights - Robert Hecht-Nielsen presenting his work on developing a conversational machine. This is a long-term research project that Robert runs at Fair Isaac, working on building a machine that is "conversational". The project is called "Chancellor", after the kind of person who would handle every request of royalty. The idea is that it will allow the construction of a machine/service that can provide this service for a family.

His first example is of a person saying "looks like we are almost out of cat food" to their "chancellor". Processing this involves the machine taking an arbitrary statement that it has not necessarily heard before and successfully interpreting it. Clearly there is a whole layer of services below this to make it happen (ordering cat food, arranging delivery etc) but the key is to pick up the natural language and use it effectively. This is the research Robert has been conducting.

The rationale for the project comes from the fact that mass literacy is comparatively recent and, even in the US, limited: only 13% have full literacy and comfort with reading and writing, another 44% have some major flat spot, and 43% have real problems. Thus you cannot rely on writing to reach 100% of the population unless you have a conversational interface.

Robert drilled into how cognition is thought to work and how the cerebral cortex handles knowledge. Essentially, each cortical module describes one attribute of an object in your mental universe - a symbol. Knowledge, he says, is a link from one such symbol to another: repeated exposure builds links between these symbols, and these links represent 100% of what you know! Confabulation is a winner-takes-all neural network approach that analyzes all the knowledge links to find the "winner" - and this is thinking. The process then ripples onwards... Interestingly, it produces logical thinking when that is possible, thanks to constraints and patterns. Additionally, the lack of precision in the way neurons interact means you get good answers even if you don't get the best one: when they used the confabulation engine to add a word, the second, third and fourth words it selected were often reasonable too, while a nonsense input sequence produced no candidates at all. The sentence continuation engine also responded to context sentences.
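The winner-takes-all idea above can be sketched in a few lines of Python. This is a deliberately tiny illustration of the mechanism as described in the talk - symbols, co-occurrence links, and a competition over summed link strengths - not Hecht-Nielsen's actual engine; the class name and example symbols are invented.

```python
from collections import defaultdict
from itertools import combinations

class ConfabulationToy:
    """Toy illustration: knowledge is pairwise links between symbols,
    and 'thinking' is a winner-takes-all competition over those links."""

    def __init__(self):
        # link[(a, b)] counts how often symbols a and b co-occurred
        self.link = defaultdict(int)

    def expose(self, symbols):
        """Repeated exposure builds links between co-occurring symbols."""
        for a, b in combinations(set(symbols), 2):
            self.link[(a, b)] += 1
            self.link[(b, a)] += 1

    def confabulate(self, context, candidates):
        """Rank candidates by total link strength to the context symbols;
        the strongest symbol 'wins', but runners-up are often sensible too."""
        scores = {c: sum(self.link[(ctx, c)] for ctx in context)
                  for c in candidates}
        return sorted(scores, key=scores.get, reverse=True)

engine = ConfabulationToy()
engine.expose(["cat", "food", "bowl"])
engine.expose(["cat", "food", "shop"])
engine.expose(["dog", "lead", "walk"])

# "walk" has no links to the context, so it ranks last
ranking = engine.confabulate(["cat", "food"], ["bowl", "shop", "walk"])
```

Note that the ranking, not just the single winner, is meaningful - which matches the observation that the second, third and fourth choices were often reasonable as well.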

He identified four key steps to handle speech:

  • Eliminate noise to clarify one person's speech - the "Cocktail Party Processor"
    This predicts what the speaker is likely to say next based on prior few seconds so as to identify it from amongst noise (already working!)
  • Take sounds and derive plausible words
    Human speech is acoustically ambiguous - isolated words can only be identified about 60% of the time. Need to match to plausible lists of words and see which word in each list causes the overall sentence to make sense.
  • Develop the gestalt of what was actually said
    It does this using an engine that has been exposed extensively to the language and has built pairwise links between words that frequently occur together. This eliminates possibilities until only one gestalt is left
  • Finally, turn it into proper language
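The second and third steps above can be sketched together: each acoustic segment yields a list of plausible words, and the combination whose words fit together best wins. In this toy version "fit" is just membership in a hand-made set of plausible word pairs - an assumption standing in for the trained pairwise links the talk describes - and the candidate lists are invented.

```python
from itertools import product

# Hypothetical set of word pairs the engine 'knows' belong together,
# standing in for links learned from extensive language exposure.
plausible_pairs = {
    ("we", "are"), ("are", "out"), ("out", "of"),
    ("of", "cat"), ("cat", "food"),
}

# Acoustically ambiguous candidates for each segment (hypothetical):
# isolated words are unreliable, so every position offers alternatives.
candidates = [
    ["we", "wee"],
    ["are", "or"],
    ["out", "ought"],
    ["of", "off"],
    ["cat", "cut"],
    ["food", "feud"],
]

def coherence(sentence):
    """Count adjacent pairs that the engine recognizes."""
    return sum((a, b) in plausible_pairs
               for a, b in zip(sentence, sentence[1:]))

# Try every combination of candidates; the most coherent one is the gestalt.
best = max(product(*candidates), key=coherence)
# best -> ('we', 'are', 'out', 'of', 'cat', 'food')
```

Brute force is obviously only viable at toy scale; the point is that sentence-level coherence resolves ambiguities that no single word can.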

Robert then showed how the gestalt processing piece generates a plausible next sentence after being trained on reasonable sets of sentences. Essentially, a very large number of sentence sets developed "knowledge" in the machine, and this was used to generate plausible sentences. No rules or algorithms were added, just training to build the links. The engine generates correct, reasonable sentences. Some examples are in the slides, and some of these include a surprising amount of world knowledge, not just language knowledge.
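A minimal sketch of that "no rules, just links" training loop: count ordered word pairs from training sentences, then continue a prefix with the word most strongly linked to it. The tiny corpus here is invented purely for illustration.

```python
from collections import defaultdict

# links[(a, b)] counts how often word a occurred before word b
# in a training sentence - the only 'knowledge' the engine has.
links = defaultdict(int)
corpus = [
    "the cat ate the food",
    "the cat wanted more food",
    "the dog chased the cat",
    "the dog ate the bone",
]
vocab = set()
for sentence in corpus:
    words = sentence.split()
    vocab.update(words)
    for i, a in enumerate(words):
        for b in words[i + 1:]:
            links[(a, b)] += 1

def next_word(prefix):
    """Pick the unused word most strongly linked to the prefix words."""
    scores = {w: sum(links[(p, w)] for p in prefix)
              for w in vocab if w not in prefix}
    return max(scores, key=scores.get)

word = next_word(["the", "cat", "ate"])  # -> 'food'
```

No grammar rules appear anywhere; plausible continuations fall out of the link counts alone, which is the striking claim of the demonstration.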

Here are a few slides:

  • Literacy Rates
  • The engines of cognition
  • Plausible sentences

You can read his papers on his website at or his confabulation theory at

Anyway, my brain is now full and I am going home...

