Hate speech is any expression of hatred towards a person or group of people based on their identity or certain personal attributes. It can be communicated in person – at university, on the bus, on the street – or online, via social media, blogs, websites and emails.
Online hate speech is especially problematic because it can be posted anonymously, it can spread quickly, and many people believe their online words carry no real-life consequences.
But online hate speech does have real-life negative consequences. It can lead to depression or suicide, promote violence, encourage discrimination, and deepen societal divisions. For witnesses, too, it is disturbing to read and makes online life unpleasant.
There is no universally accepted definition of hate speech. Definitions differ by country, online platform and organisation: some apply a broad, catch-all definition, while others use a narrower one.
Hate speech refers to content that promotes or condones violence or hatred against individuals or groups based on certain attributes – such as race or ethnic origin, religion, disability, gender, age, nationality, veteran status, or sexual orientation and gender identity – or whose primary purpose is to incite hatred on the basis of these core characteristics.
“Yes they’re evil. Let’s kill them all” – A tweet by a US TV commentator referring to Muslims
What does online hate speech look like?
- Threats of violence (such as death threats or threats of rape)
- Racial or ethnic slurs
- Symbols of hate such as swastikas
- Encouraging others to harass someone online because of their identity
- Gay bashing or insulting someone due to their sexuality or gender identity
- Xenophobic comments telling immigrants and foreigners to leave the country
- Images or videos intended to insult or degrade a particular race, religion, nationality, or gender identity