
Neural network weaned off sexism

Terminator 2: Judgment Day / Carolco Pictures, 1991

American researchers have developed a method that, in their view, can relieve automatic image recognition systems of gender bias. Their algorithm works by imposing constraints on the output of a neural network and can reduce "sexism" by up to 47.5 percent. The paper is available on the University of Washington website.

Automatic image recognition with subsequent text labeling is used in many technologies, including modern smartphones. All computer vision systems, however, share a significant drawback: when trained on a particular, often limited, sample of images, a neural network can work incorrectly and be biased against certain groups of people.

The authors of the new work examined two algorithms: MLC, which labels the objects in an image, and vSRL, which assigns semantic roles and actions to those objects. The output of these two neural networks showed that 45 percent of all actions and 37 percent of objects were skewed toward one gender by a factor of more than two. Analyzing the image databases used for training, the researchers found that imSitu, which is used to train vSRL, contains twice as many images of women cooking as images of men in the same situation. As a result, the trained network labels the agent as a man in only 16 percent of all images of people cooking. Because of this bias toward one gender, a neural network may work incorrectly.
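How such a skew can be quantified is easy to illustrate. The sketch below is not the authors' code, and the counts are made up to mirror the cooking example above: it computes, for one activity, the fraction of images whose agent is labeled as a man, first in the training annotations and then in the model's predictions; the gap between the two is the kind of bias amplification the study describes.

```python
from collections import Counter

def male_ratio(labels, activity):
    """Fraction of images of `activity` whose agent is labeled 'man'."""
    counts = Counter(gender for act, gender in labels if act == activity)
    total = counts["man"] + counts["woman"]
    return counts["man"] / total if total else 0.0

# Hypothetical (activity, agent gender) annotations mirroring the article's numbers:
# roughly twice as many women as men cooking in training, 16% men in predictions.
training_labels = [("cooking", "woman")] * 66 + [("cooking", "man")] * 33
model_predictions = [("cooking", "woman")] * 84 + [("cooking", "man")] * 16

train_ratio = male_ratio(training_labels, "cooking")    # ~0.33
pred_ratio = male_ratio(model_predictions, "cooking")   # ~0.16
print(f"train: {train_ratio:.2f}, predicted: {pred_ratio:.2f}, "
      f"amplification: {train_ratio - pred_ratio:+.2f}")
```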

To solve this problem, the researchers developed an algorithm that relieves the model of "gender bias" when recognizing images. It is based on the constraint that objects and actions in the network's output may be associated with a subject of a particular gender no more often than they are in the training sample. That is, if the training sample contains as many images of men in the kitchen as images of women in the kitchen, the neural network should assign the action "cook" or the object "spoon" to a man as often as to a woman.

The algorithm works by constrained optimization. First, it measures the bias toward one of the genders by comparing the gender distribution of objects in the training sample with the same distribution in the network's output. If a shift exists, the algorithm imposes corresponding constraints on the network's inference. As a result, the researchers were able to reduce gender bias by 40.5 percent for the neural network that assigns semantic roles and by 47.5 percent for the network that labels objects in a photograph.
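The sketch below is a strongly simplified illustration of this idea, not the published method. It assumes we already have per-image scores for the "man" and "woman" variants of a single label (e.g. "man cooking" vs. "woman cooking"); a single correction term is then tuned at inference time until the predicted gender ratio falls within a margin of the training-set ratio. The method described in the article applies such constraints across many labels jointly; all names and numbers here are hypothetical.

```python
def predict(scores, lam):
    """Pick 'man' or 'woman' per image after shifting the 'man' score by lam."""
    return ["man" if s_man + lam >= s_woman else "woman"
            for s_man, s_woman in scores]

def debiased_predict(scores, train_ratio, margin=0.05, step=0.01, iters=1000):
    """Adjust a single correction term until the predicted gender ratio
    lies within `margin` of the training-set ratio."""
    lam = 0.0
    for _ in range(iters):
        preds = predict(scores, lam)
        ratio = preds.count("man") / len(preds)
        if ratio < train_ratio - margin:      # too few "man" predictions
            lam += step
        elif ratio > train_ratio + margin:    # too many "man" predictions
            lam -= step
        else:
            break
    return predict(scores, lam)

# Hypothetical raw scores (s_man, s_woman) for ten "cooking" images.
scores = [(0.2, 0.8), (0.4, 0.6), (0.45, 0.55), (0.3, 0.7), (0.35, 0.65),
          (0.48, 0.52), (0.1, 0.9), (0.47, 0.53), (0.44, 0.56), (0.25, 0.75)]
preds = debiased_predict(scores, train_ratio=0.33)
print(preds.count("man") / len(preds))  # close to the training ratio of 0.33
```

Because the correction is applied only at inference time, the underlying classifier itself is left untouched, which is consistent with the claim below that the constraints do not reduce recognition accuracy.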

The authors of the paper assert that their algorithm can significantly reduce the gender bias that can manifest itself in a neural network trained on a particular sample. The constraint algorithm does not reduce image recognition accuracy.

Developers of computer vision systems use other methods to improve their algorithms as well. For example, in our earlier article you can read about how researchers tackled the problem of incorrectly recognizing photos of faces partially covered by a hand.
