Facebook (FB), Twitter (TWTR) and other social media companies can now be fined as much as $57 million for failing to remove hate speech within 24 hours, after lawmakers in Germany passed a controversial law.

The Network Enforcement Act, otherwise known as the Facebook Law, passed Germany's parliament on Friday and is slated to go into effect in October. Under the law, if social media companies don't remove content that is "obviously illegal," such as hate speech, incitement to violence or defamatory posts, within 24 hours, they will face a fine that starts at $5.7 million and can go as high as $57 million. For content in grayer areas, the companies will have a week to determine whether it is offensive and needs to be removed.

Proponents of the bill contend that Facebook, Twitter, Alphabet's (GOOG) Google and other social media companies haven't done enough to stem the tide of offensive content that permeates their platforms. Facebook has been under attack ever since the U.S. election, with critics contending the platform influenced the outcome with fake news. There are also concerns that these platforms are being used by terrorist groups to spread propaganda and content aimed at inciting people to engage in violent acts. On the other side are digital activists who argue the law is an infringement on free speech and places too much responsibility in the hands of the technology companies. (See also: Facebook Declares Total War on Fake News)

In an address, Justice Minister Heiko Maas said that while freedom of expression is a "great asset," it ends where criminal law begins. "Experience has shown that, without political pressure, the large platform operators will not fulfill their obligations and this law is therefore imperative," Maas said. In a statement to The Verge, a Facebook spokesperson said: "We believe the best solutions will be found when government, civil society and industry work together and that this law as it stands now will not improve efforts to tackle this important societal problem. We feel that the lack of scrutiny and consultation do not do justice to the importance of the subject. We will continue to do everything we can to ensure safety for the people on our platform." (See more: Facebook Praised by EC for Quashing Hate Speech)

With criticism being lobbed at the social media companies, they have been taking steps to combat the problem. Facebook plans to hire 3,000 more people in the next year to pore over and flag offensive content. The 3,000 are in addition to the 4,500 Facebook already employs to review posts that could violate the social network operator's terms. It also announced it is using artificial intelligence to find content linked to terrorism immediately. For example, the company said it is deploying image-matching technology that automatically looks for matches of known terror-related photos or videos when someone tries to upload the content. That means that if Facebook had previously removed an ISIS propaganda video, the system prevents others from uploading the same video. AI is also being used to understand text that may call for or advocate terrorism. Google's YouTube has also been taking steps to clean up its platform: it now ensures advertisers' ads don't show alongside controversial content and has been taking down more offensive content.
