
Scientists in the US have taught artificial intelligence to anticipate disputes on the Internet

But people still do it better.

Researchers at Cornell University, Google's Jigsaw division, and the Wikimedia Foundation have created an algorithm that scans a discussion and determines how it will end: in a dispute or a friendly conversation. The system was trained on Wikipedia “Talk” pages, where editors discuss edits to articles and the need to update sources.

The algorithm was pre-programmed to look for certain cues in a conversation that affect its tone. For example, if a discussion is going well, it contains expressions of gratitude (“Thank you for your help”), greetings (“How is your day going?”), and the word “please”.

According to the researchers, all this creates not only a friendly atmosphere but also a kind of emotional “buffer” between the participants: in such a discussion, anyone can admit they are wrong without losing face.

Among the signs that a discussion of edits will turn into a dispute, the scientists list repeated direct questions (“Why isn't it mentioned? Why didn't you look at it?”) and sentences that open with second-person pronouns (“Your sources don't make sense”). A particularly unfriendly sign is a shift to personal attacks at the very beginning of the conversation.
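To make the idea concrete, here is a minimal sketch of this kind of surface-cue matching. The pattern lists and function names are illustrative assumptions for the sketch, not the researchers' actual feature set, which is considerably richer.

```python
import re

# Illustrative cue lists, loosely based on the markers described above.
# These are assumptions for the sketch, not the study's real features.
FRIENDLY_MARKERS = [r"\bthank(s| you)\b", r"\bplease\b", r"\bhow is your\b"]
HOSTILE_MARKERS = [r"^\s*your?\b",                            # opens with "you"/"your"
                   r"\bwhy (didn't|isn't|don't|haven't)\b"]   # direct questions

def count_markers(comment: str, patterns: list[str]) -> int:
    """Count how many of the given cue patterns occur in a comment."""
    return sum(bool(re.search(p, comment, re.IGNORECASE)) for p in patterns)

def opening_tone(comment: str) -> str:
    """Crude tone guess for the opening comment of a discussion."""
    friendly = count_markers(comment, FRIENDLY_MARKERS)
    hostile = count_markers(comment, HOSTILE_MARKERS)
    return "friendly" if friendly >= hostile else "at risk"

print(opening_tone("Thank you for your help! Please check the new source."))      # friendly
print(opening_tone("Your sources don't make sense. Why didn't you look at it?"))  # at risk
```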

In addition to these markers, the scientists measured the overall “toxicity” of discussions using the Google Perspective API, an artificial-intelligence tool that rates how friendly, neutral, or aggressive a text is.
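The Perspective API is publicly available: a request sends a comment's text and gets back a toxicity score between 0 and 1. Below is a minimal sketch of such a call, assuming you have obtained your own API key (the key value here is a placeholder).

```python
import requests

# Perspective API endpoint for analyzing a comment's attributes.
URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
API_KEY = "YOUR_API_KEY"  # placeholder: substitute a real Perspective API key

def toxicity(text: str) -> float:
    """Return the TOXICITY summary score (0.0 = friendly, 1.0 = aggressive)."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, params={"key": API_KEY}, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

print(toxicity("Why didn't you even look at the sources?"))
```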

At the end of the “training period”, the algorithm could predict the outcome of dialogues with 65% accuracy. The scientists believe their work will eventually lead to machines that can intervene in Internet discussions and prevent disputes.

People can often tell in advance that a conversation will end badly, and this study shows that we can teach computers to do the same.

Justine Zhang
one of the students working on the project

However, as The Verge notes, the algorithm has serious drawbacks. One is accuracy: the computer's result is still lower than that of humans, who predicted the outcome of a dialogue with 72% accuracy. In addition, the system was trained on discussions that are atypical for the Internet, since their participants share a common goal: improving the article.

There are also ethical issues: the algorithm could cut short potentially constructive discussions if it makes a decision too early. The scientists do not yet know where to draw the line on when a machine should step in.

In addition, according to Zhang, dialogues can be completely unpredictable: she has seen examples of discussions that seemed “toxic” by every indication, but in which people eventually corrected themselves and switched to a polite tone.
