Twitter announced Wednesday that it will introduce “Safety Mode,” a new feature that will automatically block accounts for seven days when they direct harmful or abusive comments at a user or their tweets.

“Safety Mode is a feature that temporarily blocks accounts for seven days for using potentially harmful language — such as insults or hateful remarks — or sending repetitive and uninvited replies or mentions,” the press release explains.

Twitter has been taking steps to tackle its harassment issues, including new rules focused on “abusive behavior.” Reported tweets are reviewed immediately, as are cases in which an individual appears to be targeted.

The social media platform says it wants Twitter to be a place for conversation, but that those conversations can be disrupted by “dogpiling and harassment.”

“Unwelcome Tweets and noise can get in the way of conversations on Twitter, so we’re introducing Safety Mode, a new feature that aims to reduce disruptive interactions,” the press release said.

The new feature will give users the option to automatically block spammy or abusive replies once Safety Mode is turned on. The feature can be accessed through settings.

The new safety feature is in beta testing and will be evaluated by a feedback group.

“We want you to enjoy healthy conversations, so this test is one way we're limiting overwhelming and unwelcome interactions that can interrupt those conversations. Our goal is to better protect the individual on the receiving end of Tweets by reducing the prevalence and visibility of harmful remarks,” the press release said.