Point / Counterpoint: Protecting Minnesota children online could worsen the Internet and social media for all

This session, the Minnesota Legislature moved quickly on a bill meant to protect children online. But the proposal would likely degrade the experience for all internet users and encourage unnecessary data collection.

Digital ecosystems pose real and significant risks to children, who face threats such as obscene content and sexual exploitation. In Minnesota, state lawmakers have tried to address the problem by focusing on algorithms. Their proposed legislation (HF 3724 and SF 3933) would prohibit social media platforms from using algorithms to “target user-generated content to account holders under the age of 18.”

While well-intentioned, these efforts would worsen the social media experience for all users without meaningfully improving safety.

Rep. Kristin Robbins, the bill’s House sponsor, says she was compelled to act after reading a series of anecdotal reports about TikTok videos and their impact on adolescent mental health.

“Social media algorithms” may sound ominous, but in reality they are simply rules that help rank content by relevance. No two platforms work exactly the same way, and there is no singular algorithm: every social networking company classifies and prioritizes content differently, so the metrics TikTok uses differ from those of Facebook or Instagram.

A broad crusade against sorting mechanisms is therefore misguided. Algorithms, while imperfect, allow social media companies to sift through the millions of images, videos, and comments posted every day and show users what they are likely to find interesting.

The proposed legislation, however, takes no account of this nuance or of algorithms’ usefulness. It would cover any “electronic medium” that allows users to “create, share and view user-generated content.” Although Rep. Robbins may be trying to target companies like TikTok and Instagram, the bill’s sweeping language would also capture sites like LinkedIn, which caters to professionals, not teenagers. Child-proofing LinkedIn, along with many other sites whose users are predominantly adults, is unlikely to deliver significant benefits for teens’ mental health.

Worse, the bill would place a significant burden on the very segment of the population lawmakers are trying to protect. Banning algorithms simply means that Minnesotans under the age of 18 would have to filter content themselves. Offensive content would still be there, just buried among other posts; social media would essentially resemble a giant pile of unsorted index cards. Today a teenage user can flag unwanted photos and videos, thereby “teaching” the algorithm not to show such content, but the law would require all posts to be displayed indiscriminately.

In addition, to determine which account holders are Minnesotans under the age of 18, social media companies would be forced to collect a trove of personal information from all users: to comply, they would need to verify every user’s age and location. This raises significant privacy concerns, especially for human-rights activists, political dissidents, and journalists, who often rely on anonymity for their safety. As the Wall Street Journal has noted, such measures would also disadvantage groups with poorer access to identification.

This is not the first bill of its kind. It joins a litany of state content-moderation bills introduced by lawmakers on both sides of the aisle over the past few months.

After the Jan. 6 attack on the Capitol, social media platforms tightened their moderation practices, to the dismay of conservatives, who saw the crackdown as a form of censorship. Laws restricting platform moderation have since been signed in Florida and Texas, though both are being challenged in court on constitutional grounds. At the federal level, child-focused content-moderation bills such as the EARN IT Act have drawn criticism from technology experts, who say such proposals would restrict lawful speech and invade privacy.

Banning automated sorting mechanisms and introducing verification requirements would do little to solve the problems that afflict children online, but they could have unintended consequences for other vulnerable groups and jeopardize user privacy. Child safety deserves a longer, more thoughtful discussion, not quick fixes that would worsen the Internet for all.

Rachel Chiu is a contributor to Young Voices (young-voices.com), a nonprofit talent agency and PR firm for writers under the age of 35. Follow her on Twitter: @rachelhchiu. She wrote this exclusively for the News Tribune.
