Web chatbots: the next generation of speech systems?
Core speech recognition technology hasn't changed much in the past 20 years; the advances have been in how speech is embedded in applications, rather than in the core pattern-recognition algorithms. However, a step-change in this pattern matching may be coming, leading to a new market for web-based chatbots that can chat naturally, without trying to sell you something (at least, not obviously).

Over the past 20 years, speech technology has moved from research-lab prototypes to viable commercial products, thanks to faster processors, better integration into applications, and simpler, more user-friendly training and adaptation. However, the underlying speech recognition algorithms have not changed much in the same period: the standard core speech recognition engine still uses a cascade of probabilistic Markov models.

This sort of language model is best suited to applications where the system needs to understand exactly what the user says. For office dictation, command-and-control systems, telephone banking services, and so on, it is important to capture the user's spoken language verbatim, to be passed on to a word processor, database, or other back-end system.

However, in real life, people don't usually use language so precisely. Much of our natural language use is in conversation: chatting without precise goals, where the hearer does not expect or need to capture and process our exact words and sentences. When having a friendly chat, or networking with current and prospective clients and contacts, we don't have to understand exactly to be understanding. Alternative language models are better suited to latching on to key cues and phrases, and filtering out the rest. Online chatbot engines such as Pandorabots.com use a different language model called AIML, which does not look for exact matches; a sketch of this matching style appears at the end of this section.

To understand the difference between probabilistic Markov models and AIML, we have to go back a century. About 100 years ago, the Russian mathematician Markov had fun doing maths experiments with language. He experimented with text in books, counting letters, words, and in general things that come in sequences. He found a simple way to roughly predict the probability of a long sequence of letters: just take the probability of each letter, given the letter before it, and multiply these probabilities together.
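To make that concrete, here is a minimal sketch in Python of Markov's idea, using a toy two-letter alphabet. All the probabilities and names below are invented for illustration; they are not taken from any real recogniser.

# Toy first-order Markov model over a two-letter alphabet.
# All probabilities are invented for illustration.
initial = {"a": 0.6, "b": 0.4}  # P(first letter)
transition = {                  # P(next letter | current letter)
    "a": {"a": 0.3, "b": 0.7},
    "b": {"a": 0.8, "b": 0.2},
}

def sequence_probability(seq: str) -> float:
    """Multiply the probability of each letter, given the one before it."""
    if not seq:
        return 1.0
    prob = initial[seq[0]]
    for prev, curr in zip(seq, seq[1:]):
        prob *= transition[prev][curr]
    return prob

print(sequence_probability("abab"))  # 0.6 * 0.7 * 0.8 * 0.7 = 0.2352

Because every multiplication shrinks the product, practical systems work with log-probabilities instead, to avoid numerical underflow on long sequences.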
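By contrast, an AIML engine matches the user's input against hand-written patterns containing wildcards and replies from the associated template, simply discarding whatever the wildcards absorb. The sketch below, again in Python, imitates that matching style; the categories and the respond helper are invented for illustration and are not the Pandorabots implementation.

import re

# A few hand-written AIML-style categories (pattern -> response template).
# The patterns and replies are invented examples, not from any real bot.
categories = {
    "MY NAME IS *": "Pleased to meet you, {0}.",
    "HELLO *": "Hi there! How are you today?",
    "* WEATHER *": "The weather is always a safe topic, isn't it?",
}

def respond(user_input: str) -> str:
    """Latch on to a key phrase and ignore the rest: '*' absorbs any words."""
    text = re.sub(r"[^\w\s]", "", user_input).upper().strip()
    for pattern, template in categories.items():
        regex = "^" + re.escape(pattern).replace(r"\*", "(.*?)") + "$"
        match = re.match(regex, text)
        if match:
            return template.format(*(g.strip().title() for g in match.groups()))
    return "Tell me more."  # fallback when no pattern latches on

print(respond("Hello, you charming machine!"))  # Hi there! How are you today?
print(respond("my name is Alice"))              # Pleased to meet you, Alice.

A real AIML engine stores its patterns in a tree and ranks more specific patterns ahead of wildcard ones; this flat loop is just the simplest way to show the "catch the key cues, filter out the rest" behaviour that distinguishes chatbot matching from verbatim recognition.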