Following criticisms over hosting child abuse videos and other content of offensive nature on YouTube, Google announced Monday that it was going to hire thousands of new moderators.

Google is the parent company of YouTube. According to the announcement, the company would expand its moderation workforce to more than 10,000 people next year to review content that may violate its policies. In November, YouTube faced increased scrutiny following reports that inappropriate content had slipped past the YouTube Kids filter, which is designed to block content deemed unsuitable for young users.

There were also reports of videos on YouTube Kids featuring popular children's characters in violent or sexually suggestive scenarios. The problem ran deeper still: even "verified" channels on YouTube were found hosting child exploitation videos. Among these were disturbing vignettes, such as footage of screaming children being mock-tortured, that went viral.

To curb such issues, YouTube is now taking a firmer stance. According to YouTube CEO Susan Wojcicki, apart from bringing in more human moderators, the company is also developing advanced machine-learning technology to flag inappropriate content automatically, an effort she described as an ongoing process.

“Human reviewers remain essential to both removing content and training machine learning systems because human judgment is critical to making contextualized decisions on content,” Wojcicki wrote in a blogpost.

By her account, about 2 million videos featuring extremist content have been reviewed manually since June. This human review has also helped train machine-learning systems to pinpoint similar footage.

Photo: Susan Wojcicki, YouTube's CEO

Even though the recent problems brought to the fore the limitations of algorithms in finding and removing problematic content, Wojcicki said YouTube would continue to rely heavily on machine learning. Given the size of YouTube's user base, manually monitoring all content is impossible: every year, users watch 46,000 years' worth of content on the platform.

Wojcicki said that in recent weeks YouTube had used machine-learning technology to help human moderators identify and shut down hundreds of accounts and remove thousands of comments. According to YouTube, with the aid of machine learning, its human moderators were able to remove nearly five times as many videos as before, and algorithms now flag 98 percent of the videos removed for violent content.

Another reform mentioned in the statement concerned YouTube's advertising policies: stricter criteria for which videos can carry advertisements, more ad reviewers, and more manual curation.

This comes against the backdrop of a number of brands suspending their spending with Google and YouTube after reports surfaced in November that their ads were being placed alongside videos, particularly videos aimed at children, that carried comments of a lewd or exploitative nature.

In fact, YouTube's problems with ad placements stretch further back. In March, several brands pulled their ads from the video-sharing platform after learning that the ads were appearing alongside videos with extremist content.