Fake news spreading via social media became a major talking point surrounding the 2016 election, and Facebook was at the center of it. After weeks of inaction, the company has officially laid out a plan for how it will address false stories shared on its platform.

The plan, presented in a blog post by Adam Mosseri, the vice president of the company’s News Feed product, includes four primary efforts that aim to remove fake news and halt the spread of bad information.


First on the list of Facebook’s changes is making it easier to report fake news. Users will now be able to click the arrow in the upper right-hand corner of a post, and a drop-down menu will appear with a new option to report a story as fake.

While this change will empower users to police their news feeds, it also has the potential for abuse. When it comes to actually flagging fake stories on the site, the second part of Facebook’s initiative, the company will tap third-party fact-checkers for help.

Facebook is teaming up with signatories of Poynter’s International Fact Checking Code of Principles (a list that, in the U.S., includes ABC News, FactCheck.org, Snopes, PolitiFact, the Associated Press, and the Washington Post) to provide a dose of reality to fake news.

When one of these sources disputes a story, any post containing a link to that story will be flagged. A banner with a red exclamation mark will alert anyone who sees the link in their feed that the story has been disputed by fact-checkers.

When attempting to share one of these flagged stories, users will be given a warning about the story’s accuracy. However, they will still be able to share it.

Facebook previously blocked a third-party browser plugin that provided a similar service, though later clarified it was due to the plugin’s domain name and not the service it was providing.

Facebook is also testing machine-learning-powered systems for preventing the spread of fake news. It is working to incorporate a new element into its algorithm that will demote suspected fake stories based on signals from how people interact with them. According to Facebook, if reading an article makes people significantly less likely to share it, that may be a sign the story misled readers.

The social media company moved primarily to an algorithm-powered system for its trending news feature following accusations that human intervention was censoring right-wing publishers.

The final element of Facebook’s effort to curb fake news is going after the financial incentives, as fake news proved quite profitable for some. The social networking company is eliminating the ability to spoof domains, to cut back on sites that pretend to be real publications. Stories flagged as fake will also be barred from being used in ads or promoted posts.

The effort to address fake news comes despite company CEO Mark Zuckerberg being rather dismissive of the problem in the past. He went on record saying it’s a “pretty crazy idea” to think fake news on Facebook had any impact on the 2016 election. Zuckerberg also said 99 percent of posts on the platform he created are authentic.