It is not especially surprising that these videos become news. People make them because they work. For years, going public has been one of the more effective tactics for pressuring large platforms to fix a problem. TikTok, Twitter, and Facebook all make it easy for users to report abuse and rule violations by other users. But when the companies themselves appear to violate their own policies, people often find that the best recourse is to post about it on the platform itself, hoping the post spreads, attracts attention, and forces some kind of resolution. Tyler's two videos about the Marketplace bio, for instance, have each been viewed more than a million times.

“The content is flagged because they are from marginalized groups and they are talking about their experiences with racism. Hate speech and talking about hate speech look very similar to algorithms.”

Casey Fiesler, University of Colorado Boulder

"I get tagged maybe once a week," said Casey Fiesler, an assistant professor at the University of Colorado Boulder who researches technology ethics and online communities. She is active on TikTok, with more than 50,000 followers, and while not everything she sees strikes her as a legitimate concern, she said the app's pattern of problems is real. There have been several such mistakes in recent months, all of which disproportionately affected marginalized groups on the platform.

MIT Technology Review asked TikTok about each of these recent examples, and the responses were similar: after investigating, TikTok found that the problem was caused by an error, stressed that the blocked content does not violate its policies, and pointed to links showing the company's support for the affected groups.

The question is whether this cycle—a technical or policy error, a viral backlash, and an apology—can be changed.

Solve problems before they arise

"There are two kinds of harm from algorithmic content moderation that people have observed," Fiesler said. "One is false negatives. People ask, 'Why is there so much hate speech on this platform, and why hasn't it been taken down?'"

The other is false positives. “Their content is flagged because they are from marginalized groups and are talking about their experiences with racism,” she said. “Hate speech and talking about hate speech look very similar to algorithms.”

She pointed out that both categories of error hurt the same people: those targeted by abuse end up algorithmically punished for speaking out about it.

TikTok's mysterious recommendation algorithm is part of its success, but its unclear and constantly shifting boundaries have a chilling effect on some users. Fiesler noted that many TikTok creators self-censor words on the platform to avoid triggering a review. Although she isn't sure how effective the tactic is, Fiesler has started doing it herself, just in case. Account bans, algorithmic mysteries, and strange moderation decisions are a constant part of the conversation on the app.
