Anonymous ID: 04c659 April 8, 2023, 12:34 a.m. No.18659489

https://youtu.be/7uAUoz7jimg

 

03:34:30

 

Waiting for the break of day

Searching for something to say

Flashing lights against the sky

Giving up, I close my eyes

Anonymous ID: 04c659 April 8, 2023, 3:02 a.m. No.18659755

https://www.forbes.com/sites/worldeconomicforum/2021/07/01/why-we-need-a-global-framework-to-regulate-harm-online/

 

Why We Need A Global Framework To Regulate Harm Online


July 1, 2021

World Economic Forum

Contributor


 

By Cathy Li, Head of Media, Entertainment and Sport Industries, World Economic Forum and Farah Lalani, Community Curator, Media, Entertainment and Information Industries, World Economic Forum

 

The pandemic highlighted the importance of online safety, as many aspects of our lives, including work, education, and entertainment, became fully virtual. With more than 4.7 billion internet users globally, decisions about what content people should be able to create, see, and share online had (and continue to have) significant implications for people across the world. A new report by the World Economic Forum, Advancing Digital Safety: A Framework to Align Global Action, explores the fundamental issues that need to be addressed.

 

While many parts of the world are now moving along a recovery path out of the Covid-19 pandemic, major barriers still stand in the way of emerging from this crisis with safer societies online and offline. By analyzing the following three urgent areas of harm, we can start to better understand the interaction between the goals of privacy, free expression, innovation, profitability, responsibility, and safety.

 

Health misinformation

 

One main challenge to online safety is the proliferation of health misinformation, particularly when it comes to vaccines. Research has shown that a small number of influential people are responsible for the bulk of anti-vaccination content on social platforms. This content seems to be reaching a wide audience. For example, research by King’s College London has found that one in three people in the UK (34%) say they’ve seen or heard messages discouraging the public from getting a coronavirus vaccine. The real-world impact of this is now becoming clearer.

 

Research has also shown that exposure to misinformation was associated with a decline in intent to be vaccinated. In fact, scientific-sounding misinformation is more strongly associated with declines in vaccination intent. A recent study by the Economic and Social Research Institute's (ESRI) Behavioral Research Unit found that people who are less likely to follow news coverage about Covid-19 are more likely to be vaccine hesitant. Given these findings, it is clear that the media ecosystem has a large role to play in both tackling misinformation and reaching audiences to increase knowledge about the vaccine.

 

This highlights one of the core challenges for many digital platforms: how far should they go in moderating content on their sites, including anti-vaccination narratives? While private companies have the right to moderate content on their platforms according to their own terms and policies, there is an ongoing tension between too little and too much content being actioned by platforms that operate globally.

 

–1/3/x

Anonymous ID: 04c659 April 8, 2023, 3:04 a.m. No.18659760

The solution to online abuse? AI plus human intelligence

weforum.org/agenda/2022/08/online-abuse-artificial-intelligence-human-input

Opinion

 

Cybersecurity | Aug 10, 2022

With AI and human intelligence, scaled detection of online abuse can reach near-perfect precision.

 

Readers: Please be aware that this article has been shared on websites that routinely misrepresent content and spread misinformation. We ask you to note the following:

 

1) The content of this article is the opinion of the author, not the World Economic Forum.

2) Please read the piece for yourself. The Forum is committed to publishing a wide array of voices, and misrepresenting content only diminishes open conversations.

 

With 63% of the world’s population online, the internet is a mirror of society: it speaks all languages, contains every opinion and hosts a wide range of (sometimes unsavoury) individuals.

 

As the internet has evolved, so has the dark world of online harms. Trust and safety teams (the teams within online platforms typically responsible for removing abusive content and enforcing platform policies) are challenged by an ever-growing list of abuses, such as child abuse, extremism, disinformation, hate speech and fraud, as well as by increasingly advanced actors misusing platforms in unique ways.

 

The solution, however, is not as simple as hiring another roomful of content moderators or building yet another block list. Without a profound familiarity with different types of abuse, an understanding of hate group verbiage, fluency in terrorist languages and nuanced comprehension of disinformation campaigns, trust and safety teams can only scratch the surface.

 

A more sophisticated approach is required. By uniquely combining the power of innovative technology, off-platform intelligence collection and the prowess of subject-matter experts who understand how threat actors operate, scaled detection of online abuse can reach near-perfect precision.

 

Online abuses are becoming more complex

 

1/x/x

Anonymous ID: 04c659 April 8, 2023, 3:05 a.m. No.18659763

Why we need a global framework to regulate harm online

weforum.org/agenda/2021/06/global-framework-regulate-harmful-online-content

Future of Media, Entertainment and Sport | Jun 29, 2021

This article is published in collaboration with Forbes.

One in three children is exposed to sexual content online.

 

Digital platforms used by billions of people around the world are being misused to cause harm and endanger people.

 

Urgent areas of concern, including child exploitation, highlight fundamental deficiencies in the current digital media ecosystem.

 

A new report by the World Economic Forum examines these issues and how we can address them to improve online safety.


 

How should the safety of digital platforms be assessed?

 

What is the responsibility of the private and public sectors in governing safety online?

 

How can industry‑wide progress be measured?

 
