At a time when social media and technology permeate every aspect of our lives, safeguarding younger generations has become a priority. Authorities in the UK, recognising the need for stronger safety precautions, have released a new set of guidelines intended to protect children on social media platforms.
The new law gives the regulator, Ofcom, additional enforcement tools and places on businesses the burden of protecting minors from some lawful but harmful content.
It brings in additional regulations, like making pornographic websites verify users' ages before allowing them to see content.
Platforms must also demonstrate their commitment to removing unlawful content.
Other new offences include posting "deepfake" pornography, which uses artificial intelligence (AI) to incorporate someone's likeness into pornographic content, and cyber-flashing, which is the act of sending unwanted sexual imagery online.
The act also contains provisions to facilitate the process by which grieving parents can get data about their deceased children from digital companies.
In an effort to better safeguard children online, new guidelines have been released. These include restricting direct messaging and removing children from suggested friend lists.
These measures are included in the first draft codes of practice that Ofcom has produced under the recently enacted Online Safety Act.
The draft focuses on illegal online content relating to fraud, child sexual abuse and grooming.
Platforms will be legally required to keep children's location information private and to restrict who can send them direct messages. In the coming months, Ofcom plans to release further rules on online safety and on the promotion of content related to suicide and self-harm.
All new codes will need to be approved by Parliament before they can be implemented.
It is hoped that the codes announced today will come into force by the end of next year. The codes also push larger platforms to employ hash-matching algorithms to identify illicit images of abuse, along with tools to detect websites that host such content.
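Hash-matching works by comparing a fingerprint (hash) of an uploaded file against a database of fingerprints of previously identified illicit images, so a platform can flag a match without storing or viewing the images themselves. Production systems typically use perceptual hashes (such as Microsoft's PhotoDNA) that tolerate resizing and re-encoding; the minimal sketch below uses plain SHA-256 exact matching for illustration only, and the hash set and function name are hypothetical.

```python
import hashlib

# Hypothetical database of hex digests of known flagged images
# (illustrative placeholder bytes, not real data).
KNOWN_HASHES = {
    hashlib.sha256(b"known-image-bytes-1").hexdigest(),
    hashlib.sha256(b"known-image-bytes-2").hexdigest(),
}

def matches_known_image(image_bytes: bytes) -> bool:
    """Return True if the upload's SHA-256 digest appears in the known-hash set."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES

# An upload whose bytes exactly match a listed image is flagged;
# anything else passes through.
print(matches_known_image(b"known-image-bytes-1"))   # True
print(matches_known_image(b"harmless-photo-bytes"))  # False
```

Note that exact hashing breaks if a single pixel changes, which is why real deployments rely on perceptual hashing rather than cryptographic digests.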
According to Ofcom, providers should deactivate accounts owned by prohibited organisations and remove posts connected to financial data theft using automated detection systems.
Ofcom advised tech companies to designate a named person, accountable to senior management, for compliance with the codes.