A group of German computer scientists has developed sophisticated software that uses artificial intelligence to “look” into the future.

While sci-fi movies have painted “looking into the future” as a situation in which a piece of technology can reveal exactly what will happen, the reality is different and revolves more around action anticipation – analyzing a chain of events to predict an upcoming move well in advance.

The idea is pretty similar to how the predictive text system on a phone recognizes writing patterns to suggest which words you are going to type next, or how we humans can tell what is going to happen next after seeing the first few steps of an activity we already know.

This sounds pretty simple to us, but for machines, learning such an ability is relatively complicated. Most systems of this kind have predicted only a few seconds of future activity to date.

However, that may soon change because the self-learning software, developed by the research team at the University of Bonn, Germany, can look a few minutes ahead to predict an activity and its duration – a development that would certainly push the boundaries of the technology.

The group trained the system with 40 videos showcasing several steps of preparing different recipes. Each clip, approximately six minutes long, demonstrated 20 different actions and included information about the time taken to perform each specific action.

When the team then showed the system only a small opening portion of a new, unseen clip of a man preparing a similar meal, it used what it had learned to guess the upcoming steps and delivered impressive results.
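The general idea behind this kind of next-action prediction can be illustrated with a toy sketch. This is not the Bonn team's model, which learns from video; it is a minimal frequency-based predictor over labeled action sequences, with hypothetical action names, shown purely to make the concept concrete:

```python
# Illustrative sketch only: the actual system learns from video footage;
# this toy version just counts which action most often follows another.
from collections import Counter, defaultdict

def train(sequences):
    """Count, for each action, which action most often follows it."""
    following = defaultdict(Counter)
    for seq in sequences:
        for current, nxt in zip(seq, seq[1:]):
            following[current][nxt] += 1
    return following

def predict(following, observed, steps):
    """Extend an observed prefix by the most frequent next actions."""
    predicted = []
    current = observed[-1]
    for _ in range(steps):
        if current not in following:
            break  # no data on what follows this action
        current = following[current].most_common(1)[0][0]
        predicted.append(current)
    return predicted

# Hypothetical annotated action sequences from recipe videos.
recipes = [
    ["crack_egg", "whisk", "pour", "fry", "serve"],
    ["crack_egg", "whisk", "fry", "serve"],
    ["crack_egg", "whisk", "pour", "fry", "serve"],
]
model = train(recipes)
print(predict(model, ["crack_egg", "whisk"], 3))  # → ['pour', 'fry', 'serve']
```

The real system additionally predicts how long each upcoming action will last, which a frequency table like this does not capture.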

The technology did better than other systems in this arena, but as the researchers described, it is just a first step in the field of action prediction. In the future, they hope a system like this could be developed into one in which machines predict an action even before a person thinks of performing it, just like a loyal butler who senses his master’s needs. This could aid the field of robotics and allow machines to guide their human counterparts while cooking or performing critical surgeries, where missing an important step could cost a person’s life.

The research team will present its work on the system at this year's IEEE Conference on Computer Vision and Pattern Recognition, to be held June 19-21 in Salt Lake City.