Johan Broddfelt

Lessons from building a spam filter

We all want happy customers, but sometimes they have a bad day, and that might affect you in more ways than you think. You might lose them as a customer, but even worse, they might write about you on social media and tell all their friends about the way you treated them.

To start, you need to define what you want to do. In this case, the goal was to find negative language and spam in comments on the website CityPolarna.se. Then you must find the data and structure it in a way that makes it easy for an algorithm to process. Here that task was quite easy, because all the data could be found in two tables in the database, but there were some issues with strange characters, bad spelling and missing spaces between words.

The next step was to analyse the data and see if you can identify, manually, what you are looking for. In this case that was done in cycles: running the algorithm, then looking at the comments it found. Most of the issues turned out to be caused by comments that were labelled incorrectly, but things like irony, or simply excessive use of numbers and exclamation marks, also caused the algorithm to flag a comment as negative.

When splitting the sentences up into smaller pieces, we could better identify specific issues caused by bias. For instance, Stockholm appeared more often in negative comments and therefore always pushed the classification of a comment toward a negative score, while Malmö was associated with more positive comments. Bias is one of the more dangerous parts of machine learning, because it can influence the decisions a model makes in ways that are hard to predict. But it is not only the data that can make a model biased. I, as a developer, also influence what counts as a negative comment when I manually adjust labels while analysing the output from the model. What I classify as negative might not be considered negative by other people. So we need to be careful as we put more and more trust in these systems.
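As a rough illustration of that word-level analysis, the sketch below is not the actual CityPolarna model (which was a neural network) but a simple count of how often each word appears in negative versus positive comments. A word like Stockholm turning up mostly in negative comments is exactly the kind of bias described above. The function names and the sample comments are made up for the example.

from collections import Counter
import re

def tokenize(text):
    # Lower-case and split on anything that is not a word character,
    # which also copes with missing spaces around punctuation.
    return [t for t in re.split(r"\W+", text.lower()) if t]

def word_bias(labelled_comments):
    """labelled_comments: iterable of (text, label) where label is 'neg' or 'pos'."""
    neg, pos = Counter(), Counter()
    for text, label in labelled_comments:
        counts = neg if label == "neg" else pos
        counts.update(set(tokenize(text)))   # count each word once per comment
    bias = {}
    for word in set(neg) | set(pos):
        n, p = neg[word], pos[word]
        bias[word] = n / (n + p)             # 1.0 = only seen in negative comments
    return bias

comments = [
    ("Trevlig kväll i Malmö!", "pos"),
    ("Dåligt arrangerat i Stockholm!!!", "neg"),
    ("Stockholm igen, vilket skräp", "neg"),
]
for word, score in sorted(word_bias(comments).items(), key=lambda kv: -kv[1]):
    print(f"{word:12s} {score:.2f}")

Words with a score near 1.0 or 0.0 are candidates to inspect: either they really are good signals, or they are place names and other innocent words that happen to co-occur with one class in the training data.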

When the model does what it is supposed to, we can start integrating it into our solution. At that point it is a good idea to incorporate a feedback mechanism that produces more labelled data to train the model on in the future. In this case, when a comment is reported as negative, a simple button that verifies or corrects the classification would let us reuse that decision as a training label later on.
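A minimal sketch of what storing that feedback could look like. The SQLite database and the comment_feedback table are assumptions invented for the example, not the site's actual schema.

import sqlite3
from datetime import datetime, timezone

def record_feedback(db_path, comment_id, predicted_label, corrected_label):
    """Save the human verdict so it can be used as a label when retraining."""
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            """CREATE TABLE IF NOT EXISTS comment_feedback (
                   comment_id INTEGER,
                   predicted  TEXT,
                   corrected  TEXT,
                   created_at TEXT
               )"""
        )
        conn.execute(
            "INSERT INTO comment_feedback VALUES (?, ?, ?, ?)",
            (comment_id, predicted_label, corrected_label,
             datetime.now(timezone.utc).isoformat()),
        )

# Example: the model flagged comment 42 as negative and a moderator agreed.
record_feedback("citypolarna.db", 42, "neg", "neg")

Keeping the model's original prediction alongside the human correction also makes it easy to measure how often the classifier is wrong in production.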

- classification, spam filter, rnn, software
