Facebook has tried to get the problem of fake or misleading content under control ever since it became a major controversy during the 2016 United States presidential election. Mark Zuckerberg’s social network announced in a company blog post on Thursday that it would extend its fact-checking efforts to photos and videos. Its fact-checking had previously been limited mostly to articles posted on the site.

The company said it would use a machine learning model to identify posts that may be false, based on how users interact with them. At that point, human fact-checkers will step in to assess the photos or videos. Facebook gave a few examples of posts that would get removed, such as photos claiming the same “crisis actor” appeared at several different tragic events.

Facebook also promised this would not be the end of its efforts to curb fake news on the site.

“We know that fighting false news is a long-term commitment as the tactics used by bad actors are always changing,” Facebook’s blog post said. “As we take action in the short-term, we’re also continuing to invest in more technology and partnerships so that we can stay ahead of new types of misinformation in the future.”

The change came with just two months to go before the U.S. congressional midterm elections in November. Facebook has taken several steps to try to crack down on election interference, like removing troll pages with thousands of followers and increasing ad transparency. Mark Zuckerberg published a blog post on Thursday detailing the site’s efforts to protect elections.

Earlier in September, a former Facebook security executive said the site still had not done enough to safeguard elections. Facebook has also faced criticism of its fact-checking process this week, as the conservative publication The Weekly Standard is one of the few partisan outlets allowed to fact-check articles on the site.