Google Removes 35,000 YouTube Posts to ‘Safeguard’ EU Elections – HotAir

If I had told you 20 years ago that a major corporation with a near monopoly on a speech platform was picking and choosing what information could be communicated, the Left would have had a hissy fit.

As well it should have. No doubt such a corporation, in this case one cooperating with governments, would be suppressing information that the Establishment doesn’t want you to see. After all, the government regulates corporations, which makes them very sensitive to the wishes of the people with the guns and the accountants, right?

Well, that major corporation is Google, and they are busy 24/7/365 culling the content on their platforms to ensure that only “authoritative” things are allowed. 

Authoritative shares its root with “authorities,” “authoritarian,” and “authoritarianism.” And that is exactly what this is. Google is admitting that they are shaping the electoral information landscape in a way that satisfies the authorities. 

YouTube has (“voluntarily” or otherwise) assumed the role of a private business entity that “supports elections.”

Google’s video platform detailed in a blog post how this is supposed to play out, in this instance, in the EU.

With the European Parliament (EP) election just around the corner, YouTube set out to present “an overview of our efforts to help people across Europe and beyond find helpful and authoritative election news and information.”

The European Union has a draconian censorship law that it deploys freely to suppress any speech it finds “harmful,” such as arguments that masks don’t work, that vaccines have side effects, that COVID was made in a lab in China, or whatever else the powers that be decide shouldn’t be said out loud.

Users on Twitter frequently receive notices that Germany wants their content censored; Elon Musk at least tells users when a takedown demand arrives. On other sites, things often just disappear without warning or explanation. More often than not, it is the EU or Google deciding that you shouldn’t be saying something.

This happens a lot with medical information; YouTube famously took down a conversation between Ron DeSantis and Jay Bhattacharya on COVID-19 matters. Some schmo at Google decided that they knew more about COVID than one of the best epidemiologists in the world. 

Google is bragging that it is protecting the EU election by censoring content it deems unacceptable or harmful. I could give them a list of things I find harmful, but I bet they wouldn’t care a whit. How much anti-Western propaganda is out there? Antifa drivel? Antisemitic slanders? A-OK. 

Hell, they allow Biden to lie constantly on the platform. Hell, Karine Jean-Pierre is on there all the time!

Our policies determine what isn’t allowed on YouTube and apply to all content — regardless of language or political viewpoint. We have strict policies against hate speech, harassment, incitement to violence, and certain types of elections misinformation. For example, we remove content that misleads voters on how to vote or encourages interference in the democratic process.

Our global team of reviewers combine with machine learning technology to apply these policies at scale, 24/7. Our Intelligence Desk has also been working for months to get ahead of emerging issues and trends that could affect the EU elections, both on and off YouTube. This helps our enforcement teams to address these potential trends before they become larger issues.

Alongside removing content that violates our policies, we also track the percentage of views of violative content on YouTube before it’s removed. In Q4 2023, violative content made up 0.11%-0.12% of views on our platform, meaning that for every 10,000 views on YouTube, between 11 and 12 were of content that violated our Community Guidelines. We’re continuing to invest in this work, with AI helping to further increase the speed and accuracy of our content moderation systems.

Google AI is helping. Do you mean the same AI that doesn’t believe that White people exist or that the Holocaust happened? That AI? 

Whew! I was worried that something unreliable would be making decisions about this, like a blue-haired trans activist or a Republican. 

“Our Intelligence Desk has also been working for months to get ahead of emerging issues and trends that could affect the EU elections, both on and off YouTube,” reads the post.

In case somebody missed the point, YouTube reiterates it: “This helps our enforcement teams to address these potential trends before they become larger issues.”

It seems to me that there are certain types of information Google could and should put out there, although taking content off a platform should be reserved for only the most extreme cases.

For instance, if a bunch of videos are pushed out from a spy agency or its affiliates, it might make sense to label them and link them so people are aware that they all come from some Chinese or North Korean source masquerading as something else. 

That could be useful. Just as useful as it would have been if Twitter had released what it knew: that Hamilton 68’s list of “Russian bots” was totally bogus, created out of whole cloth by CIA operatives.

But taking down videos that Google arbitrarily decides are misinformation is election interference. We know we can’t trust the company to be anything but biased, because it has censored videos that distributed true but “harmful” information.

Google, Facebook, and all the big tech firms know where their bread is buttered. They want to keep governments happy, so they willingly suppress anybody who annoys those governments, whenever possible.

Believe it or not, YouTube is still bragging about what a good job it has done containing medical misinformation, pointing to the great work it did during the pandemic.

They only want “high-quality” content on their platform, not crazy people telling the truth.

Trust the authorities!

In the years since we began our efforts to make YouTube a destination for high-quality health content, we’ve learned critical lessons about developing Community Guidelines in line with local and global health authority guidance on topics that pose serious real-world risks, such as misinformation on COVID-19, vaccines, reproductive health, harmful substances, and more. We’re taking what we’ve learned so far about the most effective ways to tackle medical misinformation to simplify our approach for creators, viewers, and partners.

As medical information – and misinformation – continuously evolves, YouTube needs a policy framework that holds up in the long term, and preserves the important balance of removing egregiously harmful content while ensuring space for debate and discussion. While specific medical guidance can change over time as we learn more, our goal is to ensure that when it comes to areas of well-studied scientific consensus, YouTube is not a platform for distributing information that could harm people.

YouTube uses all sorts of “comfort language” (as they call it in legislation) to assure us that they welcome debate, but the reality is that they promote a narrative. Even as they loosened the rules around COVID, they continued to put disclaimers on all the videos, pointing viewers to the CDC.

No group spread more misinformation about COVID-19 than the CDC. Perhaps NIAID did, but it’s close.

Big Tech won’t get out of the speech-policing business, and not just because they love policing speech; they can’t and won’t, because it pleases their government masters to have the power, by proxy, to silence people. Every time Big Tech annoys a politician, you hear mumblings about antitrust action; the talk suddenly quiets when the oligarchs comply.

Surprising, isn’t it?


