Toxic comments are harmful to a healthy community, but we don't want moderators to spend significant effort policing them. We therefore built a machine learning system to detect toxic comments; any comment classified as toxic is blocked by default, and a blocking popup appears when a user tries to post it.
We use machine learning to detect toxic comments. Currently, the following toxic labels are supported:
- severe toxicity
- identity attack
- sexually explicit
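The blocking decision described above can be sketched as follows. This is a minimal illustration, not the actual system's API: the label names, the `is_blocked` function, and the `threshold` value are all hypothetical, assuming the classifier returns a per-label score between 0 and 1.

```python
# Hypothetical sketch of the blocking decision. The label names, function
# name, and threshold are illustrative assumptions, not the real system's API.

TOXIC_LABELS = ("severe_toxicity", "identity_attack", "sexually_explicit")

def is_blocked(scores: dict[str, float], threshold: float = 0.8) -> bool:
    """Block the comment if any supported toxic label's score
    meets or exceeds the threshold."""
    return any(scores.get(label, 0.0) >= threshold for label in TOXIC_LABELS)
```

A comment scoring high on any single label is enough to trigger the popup; labels the classifier did not report are treated as score 0.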