wisprabbit

puzzle + interactive fiction bnuuy

hello! i make logic puzzles and interactive fiction games. i'm good and nice


twitter (not used much anymore)
twitter.com/wisprabbit
crosshare (crossword blog, still active-ish)
crosshare.org/wisprabbit
puzz.link (logic puzzles, defunct because those bitches at twitter ate the api)
puzz.link/db/?via=wisprabbit

cofruitrigus
@cofruitrigus

The BBC has published an article researching abuse directed at UK Members of Parliament (MPs); they have also published more information in this Google doc. The analysis looked at tweets from a six-week period using Perspective, a Google tool that uses machine learning to identify abusive and toxic comments.

Perspective alone probably should not be used to determine whether a comment is toxic without human review. On their website, they list three ideas for using their technology:

  • To highlight comments for review by moderators.
  • To give feedback to commenters that what they're saying isn't very nice, prompting them to reconsider before posting.
  • To let social media users hide potentially offensive comments.

This methodology has been criticised on several grounds. Since Perspective allows anyone to check comments against it, it's easy to see that it rates most comments containing any swear words as toxic, regardless of context. In their own methodology section, the authors also note that the analysis does not distinguish between toxic tweets directed at an MP and ones that merely mention them while being directed at someone else. The commentator Ash Sarkar, of the left-wing media organisation Novara Media, has pointed out that the tool doesn't flag common racial slurs used in the UK, yet the word "Tory", which is very commonly used to refer to Conservative party politicians even by respected journalists, does cause a comment to be rated as toxic. This makes it very odd that the authors conclude that ethnic minority MPs were not more likely to receive toxic tweets.
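(For what it's worth, you can poke at this yourself: Perspective is exposed through Google's Comment Analyzer API. Below is a rough Python sketch of how you might request a toxicity score for a piece of text; the API key is a placeholder, and the endpoint and field names are my reading of Google's public docs, not anything taken from the BBC's analysis.)

import requests

API_KEY = "YOUR_API_KEY"  # placeholder; needs the Comment Analyzer API enabled
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

def toxicity(text):
    """Return Perspective's TOXICITY summary score (0.0-1.0) for a comment."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=body, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Comparing near-identical comments shows how much the score leans on
# individual words ("Tory", swearing) rather than on context.
print(toxicity("what an absolute Tory thing to say"))
print(toxicity("what an absolutely sensible thing to say"))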

My own thoughts

The article says that former prime minister Boris Johnson alone received over 18,000 toxic tweets, but he almost certainly didn't see most, if any, of them, and can we really say he didn't deserve it? In addition, calling someone a "liar" or a "hypocrite" isn't very nice, but it is often true.



direlog
@direlog

Plot[edit]

In November 2011, the Divine Invasion begins, with the Host converting Earth's human biomass into the Devil's Watchmaker. Retreating to the safety of Australia, LIBRA activates their contingency plan to destroy civilization and reassert the godless forces of nature, cutting off the Host from the substrate of human perception.

Led by the digitized consciousness of the late Brazilian racing driver Ayrton Senna, the player is gently coached to slot together thousands of realistically-modelled buildings[4] until anthropic object-orientation is weakened to the point where the Host can no longer survive.



ChaiaEran
@ChaiaEran

So, I've been thinking about this for a few days now, ever since the really big influx of Twitter migrants started, and the reification of Cohost as a guaranteed safe space makes me a little uneasy? It's good that we're calling out toxic behaviours and attempting to refrain from them, but Cohost isn't inherently safer than any other social media site. Preserving the existing relaxed culture is a good thing that I've pushed for, but we need to keep in mind that it's not because it was here first (if the culture on Cohost had been aggressive and petty before the Twitter users came, I'd be welcoming attempts to change it); it's because it's healthier and more compassionate, thanks to a directed effort to make it so. This kind of safety and kindness requires constant effort; acting in good faith is difficult, while acting in bad faith is easy.

It's certainly easier to act in good faith on Cohost than on Twitter, thanks to design differences and the lack of an algorithm, but I'm still a little concerned about the idea of lionizing the website as inherently good-faith. We should remain critical (as in critical thinking, not as in criticism) of every space we enter, both on- and offline. Good-faith action and safety aren't just about always giving the benefit of the doubt; they also involve being willing to ask pointed questions when called for. I trust @staff, because they've done a pretty good job so far, and so I'm willing, when needed, to go to bat for them against bad-faith action. But that trust is predicated on their actions; it's earned, not owed.

This turned into a bit of a ramble, but I hope I've gotten my point across? Safe spaces are not inherently safe, and we need to work to keep them that way.


daboross
@daboross

i think this is important

the two things i would push for in a "culture" here if there is one, given what i've seen, are:

  • intentional actions to improve the space
  • avoiding toxic positivity - don't just be happy and positive at all costs

i think i've reposted at least one post along the lines of the latter, and this touches on the former nicely