YouTube, the world's largest video-sharing platform, is in turmoil as its largest source of revenue, advertisers, pull their ads off its videos in large numbers. The issue was flagged on Friday, when the U.K. government pulled its ads, with several large U.S. advertisers such as Verizon and AT&T following suit.

Even after Google issued an apology Tuesday, the problem shows no sign of receding. On Wednesday, The New York Times reported that Johnson & Johnson, one of the largest consumer goods companies, was also pulling its ads off YouTube for the time being.

Read: UK Government Pulls YouTube Ads Over Offensive Content

While the ad boycott could hurt YouTube's revenues, it also raises questions about the website's business model and its content-monitoring policies.

YouTube lets users post video content on the website and places ads on those videos. The revenue generated is shared between YouTube and the content creators.

With thousands of users creating YouTube content and millions of videos being posted on the platform, monitoring is no easy task for tech giant Google. The company's existing policy of investigating videos only once they are flagged for offensive content does not seem to be working either.

“We’ve begun an extensive review of our advertising policies and have made a public commitment to put in place changes that give brands more control over where their ads appear. We're also raising the bar for our ads’ policies to further safeguard our advertisers’ brands,” Google said Wednesday in a statement accompanying its apology.

Google generally positions itself as a neutral technology platform, but the appearance of hate content could be detrimental to both users and advertisers' images. Because its operations are highly automated, Google has offered no solution for content monitoring at the scale YouTube requires. The company has responded only with a promise to give advertisers better control over where their ads appear and to make it easier to blacklist content.

The interpretation of what does and does not constitute extremist content could itself pose a problem. For example, white supremacist content, including some associated with the Ku Klux Klan, has recently started appearing more regularly on the video-sharing website. But what about a video satirizing the KKK that mentions the group? How YouTube will differentiate between the two is not yet known.

Read: YouTube Apologizes Over Censoring LGBT Content In Restricted Mode

YouTube has experimented with content moderation in the past, too. It offers a "Restricted Mode," which filters out content not suitable for children. But over the last week, LGBT YouTube creators began complaining that their content was being hidden from viewers using Restricted Mode.

On Monday, the company issued an apology to the LGBT YouTube creators with a promise to fix the problem.

The episode shows how content monitoring is a quagmire for video-sharing sites and social media in general. The problems YouTube faces are not unique: Twitter introduced changes to its platform to curb abuse in February.

How social media websites will monitor such content remains to be seen, now that the issue has come to the fore and has begun affecting their revenues.