Even people who make an effort to be inclusive sometimes use loaded language. Classic texts, work emails, and public debate commonly suffer from this problem. As leaders, educators, parents, and citizens, we all benefit when our language is inclusive.
We created “Bias Catcher” as a tool to process a passage of text, such as an email, a newsletter, or a social media post. By highlighting isolating language, Bias Catcher helps people identify unconscious bias in their written communications.
Bias Catcher uses machine learning to automatically flag relevant content for review. A word vector model finds semantic relationships between the input text and an initial list of keywords and phrases. A semantic distance measure lets this matching detect words that are not on the list but are semantically close to a word that is.
For example, the list flags that blonde is sometimes used in an offensive context. The semantic distance measure then detects that redhead might raise the same concern, even though it is not on the list.
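To illustrate the idea, here is a minimal sketch of semantic-distance matching using cosine similarity. The tiny 3-dimensional vectors, the `FLAGGED` list, and the `THRESHOLD` value are all invented for illustration; a real system would load pretrained embeddings (such as word2vec or GloVe) with hundreds of dimensions and a tuned cutoff.

```python
import math

# Hypothetical toy word vectors, invented for this sketch only.
VECTORS = {
    "blonde":  [0.9, 0.1, 0.2],
    "redhead": [0.8, 0.2, 0.3],
    "table":   [0.1, 0.9, 0.1],
}

FLAGGED = {"blonde"}   # the initial keyword list
THRESHOLD = 0.95       # illustrative similarity cutoff

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def flag_words(text):
    """Return words that are on the list, or semantically close to a listed word."""
    hits = []
    for word in text.lower().split():
        vec = VECTORS.get(word)
        if vec is None:
            continue  # word not in the vocabulary
        if word in FLAGGED or any(
            cosine_similarity(vec, VECTORS[kw]) >= THRESHOLD for kw in FLAGGED
        ):
            hits.append(word)
    return hits
```

With these toy vectors, `flag_words("the redhead sat at the table")` returns `["redhead"]`: redhead is close enough to the listed word blonde to be flagged, while table is not.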
The way our books are read, the way our leaders speak to the public, the way businesses collaborate internationally: all of it can move toward more inclusive communication, where no one feels left out.
We have a very basic version of Bias Catcher live right now. We encourage you to try it for yourself below.
So much of what we write contains unconscious gender and other biases. This tool will help you detect them. Please enter some text below to see an assessment.