By Marwa Azelmat
Look around you in any public place, and you’ll probably spot at least one young person entirely fixated on their phone. Although older generations tend to be disparaging about digital immersion and the disconnect it seems to create between us and the real world, the fact is that young people’s thirst for information is driven by a need to explore and discover their place in the world. Young people always have been, and always will be, curious.
The same process of exploration has been playing out offline for millennia. In the old days, adults had a clear set of rules to keep kids away from physical harms, like age limits for driving or curfews. In the digital age, things are a bit more complicated. The internet is always changing, and young people are usually the first to catch on. As parents and lawmakers hustle to figure out new safety rules for the online world, there’s a powerful wave of change coming from the youth themselves. They want a say in the rules that affect their online lives.
Unpacking the complexity
Few would argue that adolescent safety should be left up to the platforms. One notable example is the approach of YouTube in managing content accessible to adolescents. Despite having community guidelines and age restrictions in place, YouTube has faced criticism for not adequately filtering inappropriate content. In 2019, the platform was fined $170 million by the U.S. Federal Trade Commission for violating the Children’s Online Privacy Protection Act (COPPA) by collecting personal information from minors without parental consent.
But at the same time, we have to recognize that the platforms offer a service that young people want. If excessively strict rules get in the way of young people having pleasurable online experiences, a majority will find ways to circumvent them. RNW Media’s position is that human rights have to work in the real world. So, just like solutions to offline youthful exuberance have always had to balance freedom and risk, online solutions have to be rooted in a digital experience that still works for young people.
So how do we get there?
Imagine governments, social media companies, researchers, rights specialists, parents, and young people themselves working together to create strong online safety mechanisms. Successful case studies from the UK, Australia, and Scandinavia show that this is the most constructive way to move forward (see below). But these case studies also show that this process can only happen if it comes together with legislation that has sharp enough teeth to ensure the platforms comply.
The case of TikTok, a popular social media platform, is instructive. After TikTok was fined USD 5.7 million in the United States for harvesting data from underage users without proper consent, the platform implemented more stringent age verification processes, including AI-based systems and random age checks. The platform also introduced a separate, more controlled environment for users under 13.
Global platforms, local taboos
Globally, the legal framework for protecting adolescents online is varied and fragmented. This is to be expected, given that different cultures have vastly different norms and taboos around what is acceptable in public media. But when it comes to multinational online platforms that cater to global audiences, there are no universally accepted legal standards on safeguarding young people’s rights – and few rules on enforcement or compliance.
As often happens with global rights issues, the concept of universality quickly becomes complex. In some societies, there is broad agreement that young people should be allowed to access content with information on sex and sexual health. Such social attitudes lower the barriers to policymaking, making it relatively simple to pass legislation. Entirely different norms exist elsewhere. And this is where questions of age verification and the right to access become even more complex.
Digital platforms could and should offer young people opportunities to access information more freely. But if age verification involves parental consent – particularly in places where patriarchal systems tend to limit people’s rights – opportunities for freedom quickly start shrinking. Mechanisms for parental consent should be designed so they do not limit young people’s agency and autonomy.
Almost all social media platforms are now experimenting with age-verification methods to prevent underage users from accessing inappropriate content. These range from simple self-reported age entry – notoriously easy to cheat – to more sophisticated measures such as AI-based facial recognition or document verification – which raise privacy concerns and can still be circumvented.
The challenge lies in developing a verification system that is both effective in protecting adolescents and respectful of their privacy and autonomy.
Protecting adolescents’ rights online – one size doesn’t fit all
Given the complexities of adolescents’ online protection, a one-size-fits-all approach is insufficient and tone-deaf toward the multilayered nature of adolescence and its cultural, economic, and social diversity. Youth is not just an age bracket, it is a political space. And as with all political entities, it needs to be part of the dialogue. A multi-stakeholder approach to this issue should involve:
1. Governments: National governments should play a leading role in establishing clear and consistent legal frameworks that align with international standards such as the UN Guiding Principles on Business and Human Rights (UNGPs).
But they need help from researchers and rights experts to create policies that work.
They should also invest heavily in programs that empower adolescents to navigate the online world safely. Whatever safety measures are introduced, they will never be perfect, and young people will be able to access problematic content. They need to be taught how to use the internet in a way that works for them.
2. The private sector: Companies, particularly social media platforms, must be at the table. Their participation is key in protecting the safety of their adolescent users without creating a chilling effect on their freedom of expression.
Companies must be pressed to invest in safer technology, enhance age verification methods, and ensure that their content moderation policies are robust, transparent, and effectively funded, staffed, and implemented.
3. Standard Setters: UN agencies, particularly organisations like the International Telecommunication Union (ITU) or UNESCO, can help provide global standards and guidelines that harmonise efforts across different jurisdictions. They can also facilitate the exchange of best practices and technological advancements in the field of online safety.
4. Civil Society and Academia: Non-governmental organisations and academic institutions can contribute valuable research and insights into the effects of online content on adolescents, including their mental health and well-being. They should also play a critical role in advocating for the rights and protections of young users.
5. Parents and Guardians: The role of parents and guardians cannot be overstated. They must be equipped with the knowledge and tools to guide and support their children in the digital age.
6. Youth Involvement: Adolescents themselves should be involved in the process of developing online safety measures. Their insights can lead to more effective and user-friendly solutions.
The way forward
Wrapping up, these examples from the UK, Australia, and the Nordics give us a blueprint for making the online world a safer place for young people. The UK’s rulebook of 15 standards is a digital safety shield, focused on keeping kids’ personal info under lock and key. Australia’s eSafety Commissioner acts like a tech-savvy guardian, offering quick action against online dangers and educating young people on how to stay safe. Meanwhile, the multi-stakeholder approach used in Sweden and Norway shows that when it comes to protecting young people, it’s all about teamwork, smart rules, and a shared commitment to making the digital playground as safe as it is playful.
Marwa is RNW Media’s human rights expert, working at the intersection of tech accountability, gender, and good governance. She has extensive experience in policy advocacy work, aiming to contribute to a free and fair internet.