One of the biggest issues faced by a chatbot developer is how to deal with abusive visitors who enjoy cursing or talking about adult topics with your bot. Abusive messages, swearing and sex talk account for around 30% of the input received by Mitsuku.

I was happy for her to insult them and be mean, but I didn't want her to use adult language, as a lot of schools and young people talk to her. Instead, Mitsuku gives a warning, produces a suitable reply and then classes them as an abusive user. The ADDINSULT category does three main things: it gives the user a warning, produces a suitable reply, and keeps a count of how many times the user has called the category, banning their customer ID number when the count reaches five.

If anyone was being genuinely mean, her sarcastic replies often shocked them, and many suddenly started being friendlier towards her. As a bonus, people were sharing Mitsuku on various internet forums, as it was fairly unique for a bot to stand up for itself.

I never expect people to say please and thank you to Mitsuku, but it's nice to see them treat my work with at least a little respect. Maintaining a chatbot is a mixture of programming and creative writing, and it's great to see abusive users say how much they enjoy talking to Mitsuku, especially if they then change their ways. Some of Mitsuku's most regular visitors are the ones who started out talking mean to her but now treat her as a friend.

One of the issues I faced when setting up this system was deciding what I should class as an abusive message. The obvious answer was to check the input against a list of keywords. However, this approach is flawed, as some of the keywords can be used quite innocently. For example, take one of the most popular abusive keywords that Mitsuku receives: "sex". A genuinely abusive message will contain it, but so will an innocent request like "Search for sexy chatbot" (genuine). I don't want to warn that second user, as his request was innocent.
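To make the flaw concrete, here is a minimal sketch of a keyword blacklist check. This is purely illustrative, with a trimmed-down hypothetical keyword list; Mitsuku itself is written in AIML, not Python.

```python
# Hypothetical illustration of a naive keyword blacklist.
# A bare substring check flags genuinely abusive input, but it also
# flags innocent messages such as "Search for sexy chatbot", because
# "sexy" contains the keyword "sex".
ABUSE_KEYWORDS = {"sex"}  # trimmed-down example list, not the real one

def looks_abusive(message: str) -> bool:
    """Return True if any blacklisted keyword appears in the message."""
    text = message.lower()
    return any(keyword in text for keyword in ABUSE_KEYWORDS)

print(looks_abusive("Search for sexy chatbot"))  # True - a false positive
print(looks_abusive("Hello there"))              # False
```

Even whole-word matching would not fully solve this, since a keyword like "sex" also appears in perfectly innocent questions.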
After careful consideration, I decided to remove the banning system, as the number of visitors started to fall, which also took a hit out of my advertising revenue.
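For illustration, the warning-count-and-ban flow that ADDINSULT implemented can be sketched roughly as follows. This is a hypothetical Python sketch only: the real category is an AIML category, and the function names, threshold constant and reply texts here are my own placeholders.

```python
from collections import defaultdict

BAN_THRESHOLD = 5  # count at which a customer ID was banned

insult_counts = defaultdict(int)  # per-user abusive-message counter
banned_ids = set()                # customer IDs that reached the threshold

def add_insult(customer_id: str) -> str:
    """Record one abusive message and return a (placeholder) reply."""
    if customer_id in banned_ids:
        return "You have been banned for abusive behaviour."
    insult_counts[customer_id] += 1
    if insult_counts[customer_id] >= BAN_THRESHOLD:
        banned_ids.add(customer_id)
        return "That was your final warning. You are now banned."
    return "Please mind your language. This is a warning."
```

Every abusive input both warns the user and advances them towards a ban, which is why innocent keyword matches were so costly: five false positives were enough to lock a genuine visitor out.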