Jeff Hawkins and Donna Dubinsky started Numenta with the intention of modeling and mimicking how the human brain processes information. It was an ambitious task, and one that has been nearly a decade in the making. Now, nine years after it was founded, Numenta has held an open house and spoken to members of the press about just how much progress they’ve made in their field.

What they’ve come up with is a neural architecture adept at pattern recognition (much like the human brain), based on Hawkins’ theory of Hierarchical Temporal Memory. The theory states that the brain has layers of memory which store information in time sequences, which is why we can remember song lyrics, for example. This theory provides the basis for Numenta’s Cortical Learning Algorithm (CLA), which underpins all of their technologies.
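The core idea of storing information in time sequences and using it to predict what comes next can be illustrated with a toy sketch. The class below is purely illustrative and bears no resemblance to Numenta’s actual CLA; it simply remembers which element tends to follow which in a sequence, which is enough to show why sequence memory enables prediction (the song-lyrics effect):

```python
from collections import defaultdict, Counter

class SequenceMemory:
    """Toy sequence learner: remembers which element tends to follow which.

    An illustrative sketch of temporal-sequence learning, not Numenta's
    Cortical Learning Algorithm.
    """
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def learn(self, sequence):
        # Record each observed transition (previous -> next).
        for prev, nxt in zip(sequence, sequence[1:]):
            self.transitions[prev][nxt] += 1

    def predict(self, element):
        # Predict the most frequently observed successor, if any.
        followers = self.transitions[element]
        return followers.most_common(1)[0][0] if followers else None

memory = SequenceMemory()
memory.learn("twinkle twinkle little star".split())
print(memory.predict("little"))  # -> star
```

Having seen the lyric once, the model can complete it; real HTM systems work with hierarchies of such memories over noisy sensory data rather than exact symbol lookups.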

Speaking about the algorithm in an interview with VentureBeat, Dubinsky stated: “It’s modeled after what we believe are the algorithms of the human brain, the human neocortex. What your brain does, and what our algorithms do, is automatically find patterns.

“Looking at those patterns, they can make predictions as to what may come next and find anomalies in what they’ve predicted. This is what you do every day walking down the street. You find patterns in the world, make predictions, and find anomalies.”

Their first technology, Grok, was released earlier this year. Grok applies this pattern recognition to the IT environment: it detects anomalies within computer systems, which could be technical faults or bugs. Spotting unusual behaviour on a system early means the root cause can be found before the issue escalates into a wide-scale problem.
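To make the idea of flagging unusual behaviour in a metric stream concrete, here is a minimal rolling z-score detector. This is a stand-in for the kind of streaming anomaly detection described here, not Grok’s actual algorithm (which is based on the CLA); the metric values and threshold are invented for illustration:

```python
from collections import deque
import statistics

def detect_anomalies(stream, window=20, threshold=3.0):
    """Flag values that deviate sharply from the recent past.

    A simple rolling z-score sketch of streaming anomaly detection;
    Grok's CLA-based approach works very differently.
    """
    recent = deque(maxlen=window)   # sliding window of recent readings
    anomalies = []
    for i, value in enumerate(stream):
        if len(recent) >= 2:
            mean = statistics.mean(recent)
            stdev = statistics.pstdev(recent) or 1e-9  # avoid divide-by-zero
            if abs(value - mean) / stdev > threshold:
                anomalies.append((i, value))
        recent.append(value)
    return anomalies

# A steady (hypothetical) CPU-load metric with one sudden spike:
readings = [50, 51, 49, 50, 52, 50, 51, 95, 50, 49]
print(detect_anomalies(readings))  # -> [(7, 95)]
```

The detector learns what “normal” looks like from the data itself rather than from hand-coded rules, which is the property Numenta emphasises about its own approach.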

Speaking about the future, Dubinsky seemed keen to impress that Numenta’s current application was merely one link in the chain. “It’s not especially tuned for IT services. It could be anything. It could be monitoring your shopping cart on the internet. It could be monitoring the energy use in a building. It’s anything that’s a stream of flowing data.

“The one that I’m excited about is the geo-spatial one, the idea of putting 2D and 3D data into the algorithm, finding patterns and making predictions. It’s about thinking about objects moving in the world. You know when they’re moving in unusual ways or ordinary ways. They can learn these ways, as opposed to having them programmed in. […]

“Text is another area I’m excited about. We’re doing some really interesting work with another group on figuring out how to feed text into these algorithms and find semantic meaning in text.”


Interested in more content like this? Sign up to our newsletter, and you won’t miss a thing!

