That IBM’s Watson beat the humans on Jeopardy didn’t come as a big surprise to me. It had been coming ever since Kasparov left the room in tears after losing to Deep Blue.
The argument back then was that chess involves a finite number of possible moves. Intensive mathematics, brute-force processing, and sheer speed make chess a well-defined challenge – computers were well suited to it. Natural language, however, is very different. Modeling natural language mathematically is very challenging, and at the time (of Deep Blue vs. Kasparov), even natural language processing researchers admitted we were still many years away from computers that could understand queries and respond in human language. I’ve banged on about intelligent personal learning agents based on semantic technologies in the past, and Watson – a ‘natural language processing’, ‘pattern recognizing’, ‘world aware’ engine – is a huge step towards making that happen.
Ray Kurzweil calls the Watson Jeopardy match-up a “milestone” in the progression of machines towards human intelligence, which he believes will be reached within 20 years. While I disagree with a lot of what Kurzweil says because it doesn’t always seem scientific, I agree that machine intelligence is rapidly evolving and will overtake human cognitive abilities. In my opinion, computers with software and hardware that support true natural language processing, coupled with augmented reality interfaces, can change the learning landscape – or, more precisely, the way performance support is perceived and used in the workplace. At present, performance support tends to be viewed as happening in discrete episodes. For example, I refer to a quick programming guide when configuring a DVR, or use a calculator while doing math. But in a world of wearable interfaces and ubiquitous computing, it’s easy to imagine a situation where data about the workplace environment is constantly available and updated in real time; this changes how we view performance support itself.
Couple this sort of wearable, connected interface with an intelligent software agent that can make sense of its environment, has constant access to large amounts of information about that environment, and understands and responds in natural language, and you have the digital assistant I dream of. I’d like it to be able to do five things initially:
- Answer questions – simply answer questions, based on its best judgment and confidence, drawing on the internet’s data sets; and respond in human language with a concise but wide-ranging answer.
- Do research – find patterns and trends, and make effective recommendations for tasks in the workplace based on pertinent internet data sets.
- Communicate, negotiate schedules – communicate with other individuals’ personal computing agents to exchange information, schedule meetings and calls, and the like.
- Assist decision making based on large sets of existing data – there are situations in which humans need to make quick decisions without the benefit of prior experience; in such situations, the agent would determine the best possible decision based on past experience (internet data) and take corrective action. While this sounds a bit scary, being able to capitalize on the experience of others in situations similar to ours is very useful for learning.
- Digital memory/stream – the agent will constantly monitor all life activities and keep a record of decisions, situations encountered, and the environmental variables at the time. This stream of data will be constantly referenced and will serve further decision making on the part of the agent. Additionally, when stripped of identifiable personal data, these streams can help other intelligent agents make decisions.
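To make the wish list concrete, here is a minimal sketch of what such an agent’s interface might look like. Everything here is an illustrative assumption – the class, method names, and the dictionary standing in for “internet data sets” are all hypothetical, not any real system:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class MemoryEvent:
    """One entry in the agent's digital memory stream (capability 5)."""
    timestamp: datetime
    situation: str
    decision: str

class PersonalAgent:
    """Toy sketch of the five capabilities described above."""

    def __init__(self, knowledge: dict):
        self.knowledge = knowledge  # stand-in for internet-scale data sets
        self.memory = []            # the digital memory/stream

    def answer(self, question: str) -> str:
        """1. Answer questions from available data, with a fallback."""
        for topic, fact in self.knowledge.items():
            if topic in question.lower():
                return fact
        return "I don't have enough data to answer that."

    def research(self, topic: str) -> list:
        """2. Do research: collect every fact whose topic matches."""
        return [fact for t, fact in self.knowledge.items() if topic in t]

    def negotiate_schedule(self, my_slots: list,
                           their_slots: list) -> Optional[str]:
        """3. Negotiate with another agent: find the first shared slot."""
        common = [s for s in my_slots if s in their_slots]
        return common[0] if common else None

    def recommend_decision(self, situation: str) -> str:
        """4. Decision support: reuse the most recent similar decision."""
        for event in reversed(self.memory):
            if situation in event.situation:
                return event.decision
        return "no precedent; gather more data"

    def record(self, situation: str, decision: str) -> None:
        """5. Digital memory: log every situation/decision pair."""
        self.memory.append(MemoryEvent(datetime.now(), situation, decision))
```

Used together, the pieces reinforce each other: every decision the owner records becomes a precedent the agent can draw on later, which is exactly the “capitalize on experience” idea above – e.g. after `agent.record("dvr playback frozen", "reboot the unit")`, a later `agent.recommend_decision("dvr playback frozen")` returns the earlier fix.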
Put together, these five would probably amount to a super-duper performance support system. Available at all times, usable in all contexts, with situational awareness, access to vast quantities of information, and human language cognition to interpret that data – it will make an awesome learning tool.
In the future, I will probably be able to tell my agent “tell me how this machine works” or “I want to do ‘this’ with the machine,” and the agent will create a concise, human-understandable summary from internet data about that particular machine that I can act on immediately. Extend this type of questioning and response into a hundred workplace situations and you’ll find it applies equally well.
Advances like the one IBM’s Watson demonstrates, increasingly miniaturized wearable interfaces, and ubiquitous computing will change learning sooner than we imagine. I, for one, can’t wait.
P.S. You can read a succinct note about Watson here