In a recent piece, the World Economic Forum demanded censorship to protect people from the “dark world of internet dangers.”
The World Economic Forum (WEF) keeps pounding the table about how “AI”—artificial intelligence—and humans must be combined in some way, as a purported solution to just about any ill facing the economy and society.
It has never been clear whether this Davos-based mouthpiece of the financial elite develops its absurd “solutions” and “propositions” in order to support or create new storylines, or simply to appear active and keep receiving funding from its backers.
But here we are, with the World Economic Forum focusing on what it apparently considers the main concern in everyone’s lives at the moment.
No, it isn’t spiraling inflation, oil prices, or even widespread food shortages. For an organization so committed to globalization, the WEF is oddly tone-deaf to what is actually taking place worldwide.
Instead, the WEF carelessly holds forth on the “dark world of internet dangers” while individuals struggle to survive and fear the coming winter.
The organization appears to be working diligently to end “cyberbullying,” a problem it defines loosely and broadly, by squaring the circle of combating internet trolls. It also takes aim at “child abuse, deception, and radicalism,” which is about what you’d expect.
The main message of the article published on the organization’s website, however, is that neither human censors nor censorship driven by “AI” (really just machine-learning algorithms) is sufficient any longer. That, at least, is what remains once one cuts through the thicket of rhetorical smoke and mirrors.
Scaled identification of cyberbullying “may reach near-perfect accuracy by notably integrating the strength of technological advances, off-platform intelligence gathering, and the ability of subject-matter experts who know ways bad actors operate,” according to the WEF.
But what does this even mean?
The World Economic Forum gets around to saying it near the end, but spoiler alert: it still doesn’t make sense. Rather than depending on what the article repeatedly and incorrectly calls “AI,” the WEF says it is suggesting “a framework: instead of relying on Machine learning to identify at scale and living beings to analyze cases, an intelligence-based framework is extremely important.”
This will enable “trust and safety groups to counter emerging online risks before they hit individuals,” as stated by the WEF.
Ultimately, once translated into human-readable terms, the argument presented here amounts to simply pressuring social media networks to move toward “preemptive censorship.”
And what a compelling argument this is if that’s the case.
I am NOT funded by Bill & Melinda Gates, or any other NGO or government. So a few coins in our jar to help us keep going are always appreciated.