Finally! An interesting Twitter Files story that seems to reveal sketchy government behavior

from the took-them-long-enough dept

Finally, we have an interesting edition of the Twitter Files!

When the Twitter Files started, I actually expected something interesting to come out of them. Unfortunately, none of the big tech companies have been willing to be as transparent as they could be about how their content moderation practices work. Much of the transparency we have received has come through whistleblowers leaking information (which is often misinterpreted by journalists) or through companies partnering with academics, often leading to rather dry analyses of what's going on, read by perhaps a dozen people. There have been moments of openness, but the messy stuff gets hidden.

So I was hoping that when Elon took over and announced his plans to be transparent about what had happened in the past, we might actually learn some dirt. Because there is always some dirt. The big question was what form that dirt might take, and how much of it was systemic rather than one-off mistakes. But, so far, the Twitter Files have been worse than useless. They were presented by journalists who had neither the knowledge nor the experience to understand what they were seeing, combined with an apparent desire to fit the story into a particular narrative framework.

So I wrote several posts walking through the featured "evidence," showing how Musk's chosen reporters did not understand what they were looking at and misrepresented reality. Given that most journalists know to put the important revelations up top, and that each new "drop" in the Twitter Files seemed more hyped, but less interesting, than the one before it, I basically didn't expect anything of interest to come from the Files. Frankly, that was a disappointment.

As Stanford's Renee DiResta pointed out, this was a real missed opportunity. If the files had been handed over to people who understand this field, who know what was important and what was the banal day-to-day work of trust and safety, the real stories could have been surfaced.

The Twitter Files so far are a missed opportunity. In settling scores with Twitter's previous management, the platform's new owner is highlighting niche examples of arguable excesses and mistakes, likely generating far more distrust in the process. And yet there is a real need for the public to understand how platform moderation works, and to have visibility into how enforcement stacks up against policy. We could be moving toward genuine transparency, and hopefully toward a future where people can look at the same facts and see them the same way.

So when The Intercept's Lee Fang released the eighth installment of the Twitter Files, I wasn't expecting much. After all, Fang was one of the authors of the recent garbage Intercept story that totally misunderstood CISA's role in government and (falsely) argued that the government demanded Twitter censor the Hunter Biden laptop story. The fact that evidence from the Twitter Files directly refuted his earlier story should, at the very least, have led Fang to question his understanding of these things.

And yet... it looks like he may have (finally) legitimately found a real story of sketchy behavior in the Twitter Files in this most recent installment. Like everyone else, he initially posted his findings on Twitter in a messy, hard-to-follow thread (in which he admits he was granted access to Twitter's internal systems via a Twitter attorney-employee who would search for and pull up the documents he requested). He then posted a more complete story at The Intercept.

The story is still somewhat messy and confusing, and it's not entirely clear that Fang realizes what he found, but it does suggest serious government misconduct. It actually ties together a few other stories we've covered recently. First, toward the end of the summer, Twitter and Meta announced that they had found and removed a disinformation campaign running on their platforms, and all signs suggested the campaign was being run by the US government.

As we noted at the time, the propaganda campaign did not appear to be very successful. In fact, it was kind of pathetic. From the details, it seemed like someone in the US government had the dumb idea of "hey, let's create our own social media propaganda accounts to counter foreign propaganda accounts" rather than embracing "hey, we're the US government, we can speak openly and transparently." The overall failure of the campaign was not... surprising. And we were glad Twitter and Meta killed the campaign (and now we're hearing that the US government is looking into how the campaign came to be in the first place).

The second recent story we covered was about Meta's "Xcheck" program, which was initially revealed in the Facebook Files as a special kind of "whitelist" for high-profile accounts. Meta asked the Oversight Board to review the program, and just a few weeks ago the Oversight Board finally released its analysis and recommendations (after a year of reviewing the program). It turns out it's basically what we said when the program was first revealed: after too many embarrassing "false positives" on high-profile accounts (for example, President Obama's Facebook account was flagged because he recommended the book "Moby Dick" and there was an automated flag on the word "dick"), someone at Facebook instituted the Xcheck program to effectively whitelist high-profile people, so that a human would have to review any flags on their accounts before any action was taken.

As we discussed on our podcast about Xcheck, in many ways Facebook was choosing to favor "false negatives" for high-profile accounts over "false positives." The end result is that high-profile accounts can effectively get away with violating the rules longer, with delayed consequences, but are less likely to be accidentally suspended. Tradeoffs. The entire content moderation space is full of them.

Again, as we noted when that story first came out, basically every social media platform has some version of this in place. It's almost a necessity for dealing with scale without accidentally banning your highest-profile users. However, it comes with some serious risks and problems, which were also highlighted in the Oversight Board's report and policy recommendations regarding Xcheck.

So it's not at all surprising that Twitter apparently has a similar whitelisting feature. This was actually revealed a bit in an earlier Twitter Files installment, when Bari Weiss, thinking she was exposing unfair treatment of the @LibsOfTikTok account, actually revealed that it was on a similar Xcheck-style whitelist, with a flag on the account clearly saying DO NOT TAKE ACTION ON USER WITHOUT CONSULTING the executive team.

That's all the background that finally brings us to Lee Fang's story. He reveals that the US government apparently got some of its accounts placed on this whitelist after they had been flagged. The accounts, at the time, were accurately labeled as being run by the US government. But here's the sketchy part: sometime after that, the accounts were changed to stop being transparent about the fact that the US government was behind them. Yet because they were on this whitelist, they were likely able to get away with sketchy behavior under less scrutiny from Twitter, and it probably took longer for anyone to realize they were engaged in a state-backed propaganda campaign.

As the article explains, in 2017 someone in the US government noticed that these accounts (which, again, at the time clearly stated that they were run by the US government) were somehow being limited by Twitter:

On July 26, 2017, Nathaniel Kahler, at the time an official working with US Central Command (also known as CENTCOM, a division of the Department of Defense), emailed a Twitter representative on the company's public policy team with a request to approve verification for one account and "whitelist" a list of Arabic-language accounts "that we use to amplify certain messages."

"We've got some accounts that are not indexing on hashtags, perhaps they were flagged as bots," Kahler wrote. "A few of these had built a real following and we hope to salvage them." Kahler added that he was happy to provide more paperwork from his office, or SOCOM, the acronym for US Special Operations Command.

Now, it seems reasonable to question whether or not Twitter should have whitelisted these accounts in the first place, but if they were accurately labeled and weren't engaged in violating behavior, you can see how it happened. But Twitter absolutely should have had policies stating that if those accounts changed their descriptions or names or anything else, the whitelist flag would be automatically removed, or at least sent for human review to make sure it was still appropriate. And that apparently didn't happen.
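To make the missing safeguard concrete, here is a minimal sketch of the kind of policy described above: if a whitelisted account later changes the profile fields it was vetted on, the flag is revoked and the account is queued for human re-review rather than silently keeping its protected status. All names and structures here are illustrative assumptions, not Twitter's actual systems or code.

```python
# Hypothetical sketch of whitelist revocation on profile change.
# None of these names reflect Twitter's real internal tooling.
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class Account:
    handle: str
    display_name: str
    bio: str
    whitelisted: bool = False


def profile_fingerprint(acct: Account) -> tuple:
    """The fields whose change should trigger re-review of the whitelist flag."""
    return (acct.display_name, acct.bio)


def apply_profile_update(acct: Account, fingerprint_at_whitelisting: tuple,
                         review_queue: list) -> Account:
    """Revoke the whitelist flag if the profile no longer matches the state
    it had when the flag was granted, and queue the account for human review."""
    if acct.whitelisted and profile_fingerprint(acct) != fingerprint_at_whitelisting:
        review_queue.append(acct.handle)
        return replace(acct, whitelisted=False)
    return acct


# Example: an account whitelisted while transparently labeled as government-run...
queue: list = []
acct = Account("@example_gov", "US CENTCOM", "Official US government account",
               whitelisted=True)
baseline = profile_fingerprint(acct)

# ...later drops the disclosure from its bio:
acct = replace(acct, bio="News and commentary")
acct = apply_profile_update(acct, baseline, queue)
print(acct.whitelisted)  # False: flag revoked
print(queue)             # ['@example_gov']: queued for human review
```

The point of the sketch is simply that the check is cheap: a stored snapshot of the vetted profile fields plus one comparison on each profile edit would have caught the relabeled accounts automatically.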

As The Intercept's report points out, Twitter at the time was under enormous pressure from virtually every corner over how effectively ISIS was using social media for recruitment and propaganda. So the company had been fairly aggressive in trying to stamp that out. And it appears that these US government accounts got caught up in those efforts.

So there's a lot of interesting stuff revealed here: more details about the US government's foreign social media propaganda campaigns, and more evidence of how Twitter's "whitelisting" program works, including the fact that it doesn't seem to have very good controls (not surprising, since almost no company's similar tool has good controls, as we saw with the Oversight Board's review of Meta's Xcheck).

But... the spin that "Twitter helped the Pentagon in its covert online propaganda campaign" is, once again, missing the point. Neither the Pentagon nor Twitter comes out of this report looking good, but in an ideal world it would lead to more openness (a la the Oversight Board's look at Xcheck) about how Twitter's whitelisting program works, as well as more revelations about how the DOD was able to run its foreign propaganda campaign, including how it switched Twitter accounts from being public about their affiliation to hiding it.

This is where it would be helpful if a reporter who understood how all of this works were involved in the investigation and could ask Twitter questions about how big the whitelist is (for Meta it reached about 6 million users) and what the process was for getting onto it. What controls were there? Who could whitelist people? Were there ever any audits of those on the whitelist to see if they abused their status? All of that would be interesting to know, and as Renee DiResta's article pointed out, these are the kinds of questions real experts would ask if Elon gave them access to these files, rather than... whoever he keeps giving them to.

Filed Under: dod, propaganda, social media, whitelist, xcheck

Companies: twitter
