YouTube's video-recommending algorithm has been accused of fuelling a grab bag of societal ills by feeding users an AI-amplified diet of hate speech, political extremism and conspiracy-laden disinformation, all in the profit-driven pursuit of keeping billions of eyeballs stuck to its ad inventory.
Google has responded sporadically to the negative publicity flaring up around the algorithm's antisocial recommendations, announcing a few policy tweaks or restricting the odd hateful account. But it remains unclear how far the platform's tendency to promote unhealthy clickbait has actually been curbed.
According to the report, a majority of 71% of all reports came from videos recommended by YouTube's algorithm, and recommended videos were 40% more likely to be reported than videos users had intentionally searched for. Reported videos also outperformed other videos, acquiring 70% more views per day than others watched by volunteers. YouTube said videos promoted by the recommendation system result in more than 200 million views a day from its homepage, and that it pulls in more than 80 billion pieces of information.
The company said it constantly works to improve the experience on YouTube, stating: "Over the past year alone, we've launched over 30 different changes to reduce recommendations of harmful content." Thanks to these changes, it added, "consumption of borderline content that comes from our recommendations is now significantly below 1%."

YouTube is the second-most popular website in the world, after its parent company, Google. Yet little is known about its recommendation algorithm, which drives 70% of what users watch. Nearly 200 videos recommended to volunteers were eventually removed by YouTube; these videos had amassed a collective 160 million views before their removal.