TikTok has unveiled new safety features that will let parents manage the type of content their children watch on the platform. The video-sharing app said its new family safety mode would link a parent’s account to their child’s.
The feature will let parents limit how much time their child spends on TikTok, restrict who can message their child’s account, and block certain types of content from appearing in the feed. The announcement came amid an ongoing debate over the effects of Internet and social media use, especially on children and adolescents.
Last week, the U.K. government set out its initial plans for regulating Internet companies, building on its Online Harms White Paper published in 2019. The paper proposed requiring social media firms to observe a duty of care toward their users, with substantial fines as potential punishments.
TikTok’s head of trust and safety in Europe, Cormac Keenan, said the app had worked with some of the platform’s best-known creators to incorporate the features. He said these would ‘remind our community to be aware of the time they spend on TikTok and encourage them to consider taking some time out.’
Other Platforms Also Weigh In on Regulation
Writing in a blog post announcing the new safety features, Mr. Keenan said: “When people use TikTok, we know they expect an experience that is fun, authentic, and safe. As part of our ongoing commitment to providing users with features and resources to have the best experience on TikTok, we are announcing family safety mode, a new feature to help parents and guardians keep their teens safe on TikTok.”
“We will keep introducing ways to keep our community safe so they can stay focused on what matters to them – creating, sharing, and enjoying the creativity of TikTok’s community,” he added.
On Monday, Facebook published a set of suggested guidelines for regulators on proposed ‘new rules for the Internet.’ The suggestions included more independent oversight of content moderation and provisions to ‘protect’ freedom of expression.