In recent years, a number of companies, Facebook, Google and Microsoft among them, have begun working in earnest to build “deep learning” algorithms into their software, using them to recognize objects in images, faces in photographs and spoken words. Last month, Google open-sourced “SyntaxNet,” an artificial intelligence (AI) system that uses deep neural networks to parse human language and derive its meaning.

In another step toward a deep learning-based AI capable of grasping the way humans talk with near-human accuracy, Facebook on Wednesday unveiled “DeepText,” a natural language processing engine that comprehends the textual content of several thousand posts per second, spanning more than 20 languages.

“Traditional techniques require extensive preprocessing logic built on intricate engineering and language knowledge. There are also variations within each language, as people use slang and different spellings to communicate the same idea,” Facebook explained in its blog post. “Using deep learning, we can reduce the reliance on language-dependent knowledge, as the system can learn from text with no or little preprocessing. This helps us span multiple languages quickly, with minimal engineering effort.”

Currently, a major challenge researchers face in creating truly human-like AI is designing algorithms that can understand subtle nuances in language. For instance, an AI that can tell the difference between “I need a ride,” “I just finished a ride,” and “I found a ride” would know when to call a cab and when not to.

This is the gap Facebook hopes to close with DeepText. The AI would not only understand each word individually but also comprehend it in its proper context, thereby detecting the “intent” behind those words.
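To make the “intent” idea concrete, here is a deliberately naive sketch of the ride example. This hand-written keyword rule is exactly the kind of brittle, language-dependent logic that DeepText’s learned models are meant to replace; the function name and rules are illustrative assumptions, not anything Facebook has described.

```python
# Toy illustration only: DeepText uses deep neural networks,
# not keyword rules like this hypothetical classifier.

def classify_ride_intent(text: str) -> str:
    """Return 'request' for posts asking for a ride, 'mention' otherwise."""
    lowered = text.lower()
    if "need a ride" in lowered or "looking for a ride" in lowered:
        return "request"   # a good moment to suggest calling a cab
    return "mention"       # e.g. "I just finished a ride" -- no cab needed

print(classify_ride_intent("I need a ride"))           # request
print(classify_ride_intent("I just finished a ride"))  # mention
```

The point of the deep learning approach is that such distinctions are learned from examples rather than enumerated by hand for every phrasing in every language.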

“For example, someone could write a post that says, ‘I would like to sell my old bike for $200, anyone interested?’ DeepText would be able to detect that the post is about selling something, extract the meaningful information such as the object being sold and its price, and prompt the seller to use existing tools that make these transactions easier,” Facebook said in the statement.
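The extraction step Facebook describes can be sketched, very roughly, as pulling an item and a price out of the post. The regex below is a hypothetical stand-in that handles only this one phrasing; DeepText performs this with learned models, not hand-written patterns.

```python
import re

# Hypothetical sketch of extracting "meaningful information" from a
# for-sale post. Handles one phrasing only; shown purely to illustrate
# the kind of output such a system might produce.

def extract_sale(post: str):
    """Return the item and price from a 'sell my X for $N' post, or None."""
    match = re.search(r"sell my (?P<item>.+?) for \$(?P<price>\d+)", post)
    if match:
        return {"item": match.group("item"), "price": int(match.group("price"))}
    return None

post = "I would like to sell my old bike for $200, anyone interested?"
print(extract_sale(post))  # {'item': 'old bike', 'price': 200}
```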

The guiding principle behind deep neural networks is simple: create a network that mimics the way we believe neurons in our brains process information. In practice, however, it is anything but simple, not least because our understanding of how neurons store information is sketchy at best.
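At its simplest, the abstraction works like this: each artificial “neuron” computes a weighted sum of its inputs and squashes the result through a nonlinearity, and later units build on the patterns earlier units detect. The sketch below uses arbitrary made-up weights and says nothing about Facebook’s actual architecture; it only shows the mechanism.

```python
import math

# One artificial "neuron": weighted sum of inputs plus a bias,
# passed through a sigmoid nonlinearity. Weights here are arbitrary.

def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes z into (0, 1)

# A second layer of units picks up on patterns built by the first.
hidden = [neuron([1.0, 0.0], [2.0, -1.0], -1.0),
          neuron([1.0, 0.0], [-1.5, 0.5], 0.5)]
output = neuron(hidden, [1.0, 1.0], -0.5)
print(round(output, 3))  # 0.622
```

Stacking many such layers, with weights learned from data rather than chosen by hand, is what lets these networks approximate the “very complicated functions” Dean describes.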

“It’s not a true reconstruction of what neurons do. But it’s an abstract notion of how we believe neurons work in the brain,” Jeff Dean, a computer scientist who heads the Google Brain project, recently said in an interview with Wired magazine. “If you have lots and lots of these neurons and they’re all trained to pick up on different types of patterns, and there are other neurons that pick up on patterns that those neurons themselves have built on, you can build very complicated functions and systems that do pretty interesting things.”