Dot Social

The Fediverse’s Trust and Safety Warriors, with Samantha Lai and Jaz-Michael King

Episode Summary

The fediverse represents an opportunity to rethink how trust and safety works in social media. But how, and what should we still be worried about? Two experts — the Executive Director of IFTAS and a co-author of a seminal paper — explain.

Episode Notes

The fediverse offers an opportunity to rethink how trust and safety works in social media. In a decentralized environment, creating safe and welcoming places relies on community moderation, transparent governance, and innovation in tooling. No longer is one company making — and enforcing — its own rules. It’s a collective responsibility.

Samantha Lai, senior research analyst at the Carnegie Endowment for International Peace, and Jaz-Michael King, the executive director of IFTAS, are here to explain how. Samantha co-authored a seminal paper, “Securing Federated Platforms: Collective Risks and Responses,” along with Twitter’s former head of trust and safety, Yoel Roth. Jaz runs IFTAS, which offers trust and safety support for volunteer content moderators, community managers, admins and more. The two often collaborate and bring perspectives from the policy and operational sides. 

Highlights of this conversation:

Mentioned in this episode:

🔎 You can find Samantha at @samlai.bsky.social and Jaz at @jaz@mastodon.iftas.org 

✚ You can connect with Mike McCue on Mastodon at @mike@flipboard.social or via his Flipboard federated account, where you can see what he’s curating on Flipboard in the fediverse, at @mike@flipboard.com

💡 To learn more about what Flipboard's doing in the fediverse, sign up here:  https://about.flipboard.com/a-new-wave/

Episode Transcription

This transcript was generated by AI, which may affect its accuracy. As such, we apologize for any errors in the transcript or confusion in the dialogue. 

The fediverse represents an opportunity to re-think how trust and safety works within social media. 

In a decentralized space, no single entity has control over the whole network. Trust and safety in the fediverse relies on community moderation, transparent governance, and innovation in tooling. 

What does this look like in practical terms? What are some trust and safety threats in the fediverse? What needs to happen next, particularly in this robust election year?

Welcome to Dot Social, the first podcast to explore the world of decentralized social media. Each episode, host Mike McCue talks to a leader in this movement; someone who sees the fediverse’s tremendous potential and understands that this could be the Internet’s next wave. 

Today, Mike’s talking to Samantha Lai, a senior research analyst at the Carnegie Endowment for International Peace, and Jaz-Michael King, the executive director of IFTAS. IFTAS offers trust and safety support for volunteer content moderators, community managers, admins and more. Samantha and Jaz often collaborate, and we were thrilled to get them together for this episode. 

We hope you enjoy this conversation. 

Mike McCue: 

Samantha Lai, Jaz-Michael King, great to have you on Dot Social. 

Samantha Lai:

Thanks for having us. Great to be here. 

Jaz-Michael King:

Thanks for inviting us. 

Mike McCue:

You know, I thought a fun place to start, a useful place to start, would be telling you a story about a moderation event I had to deal with. Just recently, I posted something from my Flipboard about Elon Musk. It was an article about Elon, and most people had some interesting things to say about it. I posted it to my Flipboard account, which is now federated to the fediverse, but I did get one odd reply. It was a fairly challenging reply. There was nothing, like, hateful about the reply, but it was definitely a little concerning. And, you know, given that I'm also responsible for the flipboard.com instance, I thought I should probably investigate this, because my spidey sense was going off. When I went and looked at his posts, it was very clear that this person was some sort of, you know, neo-Nazi, calling for the death of politicians, that kind of stuff, right? So I realized, okay, well, this is a guy that we need to block. And, of course, I blocked him personally, and I called up Greg and said, hey, we should block this guy on our instance. But then I also looked at the instance that he was on, and it was run by another guy, and that instance administrator was also a neo-Nazi, denying the Holocaust, the whole bit, right? All of the things that neo-Nazis do. And he was clearly fomenting hate speech on this instance, where there were probably about 1,800 different people. So of course, in addition to blocking this person and the administrator, we also blocked the entire instance, because it was pretty clear to us that this was just a group of bad actors in the fediverse.

And then, of course, I also sent out a message to Eugen and to Stux and a few other large instance owners and said, hey, just a heads up, we ran into this guy, you might want to consider blocking him. And then someone also said to me, hey, you might want to think about posting with the #FediBlock hashtag and the name of the instance. And so I did that. I was a little bit nervous about that, because it was, like, a public moderation thing, and it felt a little off. But at the same time, I was like, well, these are Nazis. We should block them. This shouldn't be that hard, so let me just do this. But it definitely started causing me to ask some questions, like, well, is there a better way of doing this? I've noticed that some of the folks posting with #FediBlock, you know, they have beefs with each other. There are, like, a lot of unprofessional moderators. And it seems like a pretty significant hammer to use in a situation where, as you guys well know, there are a lot of different ways to deal with this. 

So, you know, for me, I thought, on the one hand, it was great. It was a decentralized kind of moderation experience where we were able to block, I don't know, 1,800 Nazis from the fediverse in one fell swoop, in a matter of minutes. On the other hand, I also was like, hmm, is this really the best way to do this right now? And what if this happened with someone else, and what if it was more of a gray area? How would we handle that? 

Samantha Lai:

So I found your experience really interesting. This really illustrates a couple of shortcomings we have in terms of existing moderation approaches on the fediverse, right? Number one is information-sharing mechanisms across people. You're someone who knows other people. You could directly contact other people. You could use the #FediBlock hashtag. You understand how to navigate this space. But given that you need a certain degree of moderation experience in order to even handle that whole situation, if you're someone new to the fediverse, for example, or if you're someone who's, you know, doing this on a volunteer basis, you really just have to learn moderation on the fly, make the best decisions you can based on your own judgment, and go, like, I hope this is going to work. So a way in which things will probably need to improve is just for there to be more information sharing, and tooling that makes information sharing easier, so that you don't have to, in order to set up a new server, identify all the neo-Nazi accounts out there and manually block all of them just to have a server that's not filled with toxic or problematic content. 

Jaz-Michael King:

It's fascinating to hear your personal experiences in both your role with Flipboard and your role with that particular implementation of, I presume, Mastodon as the platform in use there. As part of our work at IFTAS, we ran two big canvasses of moderators last year, and one of the questions we asked was, how long have you been a moderator? I think roughly 20 to 25% had been moderating for less than a year. And, you know, that's a big thing to think about, but that said, we are enjoying a wonderful curve of adoption and usage. And in that, I think your actions highlight something like polycentric moderation: is "decentralized" all the way down to the individual? Is it 8 billion people making decisions, or is it community leaders like yourself who set up an instance, a service on the fediverse, and therein took on the ability to pick up those tools? And what I'd say to that is, the sooner and the better we can equip people with ways to communicate not just "this particular account, this particular server: bad, good, indifferent," but what are the community norms? How do they impact you? How do they align with you, culturally, geographically, religiously, whatever your background and whatever your community is doing? How do we permeate the fediverse with those norms so that there's a shared vocabulary, a shared language, a shared understanding of what it is to defederate? And for that, and I heard Samantha mention going with your gut feeling, I'm a huge proponent of, in this particular instance, a federation policy. You know, just write down what it is that it takes to federate or defederate with someone, so you have something to point to. And if we can start sharing those beyond individual domains, we can start, you know, adding to that decentralization of governance and democracy, so that people are making decisions more informed, with a group, without a group, whatever their community is doing. 

Samantha Lai:

I think another interesting thing about what Mike told us is that yours was a pretty clear-cut case of, well, that person's a neo-Nazi, you probably don't want to federate with them. And in the context of trust and safety, there are a couple of cases that are pretty clear cut, right? Like child sexual abuse material, spam bots, neo-Nazis, deeply problematic behavior that most people would agree they probably wouldn't want. But questions of defederation or blocking get a lot more gray when it comes to, you know, interpersonal conflict or disagreements over certain issues that result in someone getting blocked or defederated from. And that's kind of a shortcoming of the use of #FediBlock as it is, because you don't really have a good understanding of why someone decided to block someone. And there's a question of, is there some way to build up a system so that people could opt into other people's selections based on certain criteria? Is there a way for people to tag the reasons why they decided to defederate or block certain accounts? And could that make it an easier, lower-cost, less labor-intensive way for people to choose how they want to moderate their own communities? 

Mike McCue:

Yeah, Samantha, that's a really interesting point, because, you know, let's just say, for example, that the administrator wasn't clearly a neo-Nazi, right? But there were some other people on the instance who were. You can't look at every single person, so you don't know; there might be some really good people on that instance, and then some problematic actors on that instance. So you wouldn't want to just block the whole instance out of the gate in that case, right? And then further to that, you know, it did make me feel a little bit queasy when I posted the #FediBlock, like, hey, we're, you know, Flipboard is blocking these guys because they're neo-Nazis. And I was thinking, hmm, what are the neo-Nazis going to say about that, you know? It did seem a little bit public, right? It feels to me like there should probably be some confidential ways of dealing with this. I could see, if you were a moderator for one year, like, do you really want to put your name out there, or your account out there, saying, hey, we're blocking these guys because of X, Y and Z? It seems like you could probably be on the receiving end of a fair amount of, you know, problematic posts as a result of that. 

Jaz-Michael King: 

I can say safely, I've never personally used #FediBlock. There's a community of intent, you know, behind putting that together that I don't think I adequately represent. So I don't use the tag, both because of where it came from and, like you, because it's a little too public. But I would also stress there are numerous communities of varying shapes and sizes of folks who are sharing threat intelligence in all manner of means and detail.

You know, we operate one or two, and I'm aware of many others. Those might be a group of like-minded administrators who are seeking to form, you know, almost like, I think of the old web rings, right? Just a consolidated group that has a shared desire for what they want to be achieving in the fediverse. There are larger rooms, too; I've seen rooms with hundreds and hundreds of administrators as well. 

Mike McCue:

Are these on Discord? Where are these rooms you've seen? 

Jaz-Michael King:

So there's, there's many Discords. There are many Matrix rooms, matrices, not sure of the plural there. I will shamelessly plug IFTAS: you know, we operate a trust and safety moderators' chat community in a relatively newly hatched web forum, with various private groups for folks to share information.

Mike McCue:

You just recently came out with IFTAS Connect, is that right? And so that would be a place that I would go to do a more, sort of, professional moderation job for this, rather than just posting with, you know, the #FediBlock hashtag. Is that the idea? 

Jaz-Michael King:

Yeah, I shy away from the term professional. I'll go with elevated. 

Mike McCue:

Elevated, okay, a bit more private. 

Jaz-Michael King:

You know, definitely. So first of all, it's a gated community; we vet who's coming in. We're looking for community moderators, managers, trust and safety professionals.

Mike McCue: 

I would have had to apply to get in. 

Jaz-Michael King:

Yeah, yeah. Which is why we have a public library of policy guidance, legal and regulatory guidance, all the bits and bobs that you can imagine you might be able to take away and read. But to get into the group, we are absolutely vetting. We need a safe space for people to discuss very sensitive issues, and then even within that community, we have some deeper layers of security. So we have an Information Sharing and Analysis Center that we just formed, which has, I think, probably about a dozen, dozen and a half folks right now. And again, that's even more vetted as to how you're going to approach the information, how you're going to be cautious about how you say it and when you say it and where you say it, that sort of thing. And there are various subgroups within the community where folks can coalesce around a particular topic or a particular platform. So that's one way of doing that, for sure. 

Mike McCue: 

Yeah. And so for people who are listening and who want to check this out, where should they go on the web to see this? 

Jaz-Michael King:

Oh, connect.iftas.org. Plenty of resources there without even signing up. And we've even had folks who are very much in the space and not sure they want to sign up, because they don't want to be seen to be saying a certain thing or making a certain decision. So we have some privacy options for how you get in. You can contact me directly if you need a more private account, that sort of thing. But yeah, please dive in at connect.iftas.org and apply for that account today. 

Mike McCue: 

And Samantha, you just recently posted a paper that you co-authored with Yoel Roth, which I thought was pretty much a definitive piece around how moderation in decentralized networks could work, should work, where the strengths are, where the weaknesses are. I thought it was a phenomenal paper. When you look at the whole world of decentralized moderation, do you feel like it has the potential to be a fundamentally better, stronger moderation model than, you know, these walled gardens? Or do you feel like, actually, it's hard to tell which world is going to be better? 

Samantha Lai:

Decentralization is so incredibly interesting, because it gives people the freedom to create their own forms of governance and their own rules and their own communities. I think the rise of decentralized social media is a sign of groups of people who want to move beyond a centralized model where moderation is conducted by private companies in ways that might fall short of different community needs, and decentralization offers an opportunity for exploring very, very different kinds of governance. What we see, and what we go through in the paper, is how in the existing spaces, as we see them right now, there are possibilities, but there is also a lack of tooling, a lack of coordination devices that can help even the most well-intentioned moderators handle complex threats. Take, for example, going back to the story that you started us off with, how a lot of decisions about defederation or banning or blocking accounts have to be made manually. It's very much dependent on the admin or moderator to make a lot of these decisions. It's very labor intensive, and a lot of these decisions are really difficult to make. What does it mean to make that process easier? Right now, moderators, at least on Mastodon, can be pretty easily overwhelmed by CSAM, by spam bots, by a lot of coordinated inauthentic behavior, because they don't have the resources to longitudinally track threats or to easily remove duplicative content through automated means. Take, for example, if a bunch of spam accounts are posting the same hashtag or the same URL across a bunch of services, moderators would have to remove that one by one on their own. That is not a viable model. It takes a lot of time, and if there's a big enough attack, it's very easy for a small group to get overwhelmed. So what does it mean to build up the tooling, to build up the communication resources required for people to be well equipped to address those threats? That's the biggest question our paper is asking, and we provide a couple of potential next steps for what needs to be done next to build up trust and safety against collective security risks on federated services.
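To make the duplicative-content point concrete, here is a minimal sketch of the kind of deterministic automation Samantha describes: grouping reported posts by the URLs and hashtags they share, so a moderator can act on a whole spam cluster at once instead of one report at a time. The types and function names here are hypothetical and not part of Mastodon or any existing moderation tool.

# Sketch of the "remove duplicates in one sweep" idea: group reported posts by
# the URLs and hashtags they share, so a moderator reviews one cluster instead
# of N identical reports. ReportedPost and find_spam_clusters are hypothetical.
import re
from collections import defaultdict
from dataclasses import dataclass

URL_RE = re.compile(r"https?://\S+")
TAG_RE = re.compile(r"#\w+")

@dataclass
class ReportedPost:
    post_id: str
    account: str
    text: str

def find_spam_clusters(reports: list[ReportedPost], min_accounts: int = 5):
    """Group reports by shared URL or hashtag; flag clusters posted by at
    least `min_accounts` distinct accounts as likely coordinated spam."""
    clusters: dict[str, list[ReportedPost]] = defaultdict(list)
    for post in reports:
        keys = URL_RE.findall(post.text) + [t.lower() for t in TAG_RE.findall(post.text)]
        for key in keys:
            clusters[key].append(post)
    return {
        key: posts
        for key, posts in clusters.items()
        if len({p.account for p in posts}) >= min_accounts
    }

# A moderation tool could then present each cluster once, with a single
# "remove all" action, instead of surfacing one report at a time.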

Mike McCue:

So Jaz, obviously, these are the kinds of things you're focused on enabling to happen, right? In addition to these communities of people that can get together and talk about these moderation issues across a range of instances, you also, I know, are doing mentorship programs. You have documentation. You also, I think, have shared lists of bad actors or bad instances that instance owners can decide to utilize. Is that right? 

Jaz-Michael King:

Yeah, we actually have a two-part program. We have two lists. We have a do-not-interact list, which is human curated and reviewed. And then we have something we call the caveat list, which is, literally, we observe roughly half of the fediverse, or half of Mastodon, I'm sorry, and we observe what they block. Then we look for consensus in what the largest portion of the fediverse is blocking, and we amalgamate that into a list. So we track agreement across a gradation: 51, 66, 80% agreement. And then we tie that to a tool that we built called Fedi Check, which is Mastodon-only for now; we can speak a little bit about where we are with tooling in the fediverse and why it's so hard. So using the Mastodon API, you can approach the Fedi Check app, review the list, and determine if that list meets your needs or not. We have no labeling yet. This is version 1.0; version two will have labeling by harm. But for now, it's literally a small selection of a few dozen services similar to the one it sounds like you encountered in your opening problem. And then, as I said, you can sort of pick and mix from an observation model of what your peers are doing. We took that as sort of a first step: what would that look like, and what would the pain be in the tooling? And it was a lot of pain to build, but that's the sort of shared-list approach. I will say it's not a shared list; we don't share the list. One of the biggest issues that I feel is tough to surmount right now is that it's easy to place a block programmatically and very hard to remove that block programmatically. So the list is only available through Fedi Check, because that way we can ensure that if something falls off the list, it comes off for users too. When the spam wave hit some time ago, many of those servers started getting blocked by many of our observed servers, so we passed those silences and recommendations on; when the observed servers retracted them, we then retracted them for Fedi Check users. So the list is not easily consumed outside of that addition-and-retraction application. But yeah, that's definitely a piece of the tooling that we've built over the past few months. And we're also very close to a polycentric CSAM detection and reporting module. I say polycentric because I really don't want to be the only person doing this, and we already have two other options lined up for folks who don't like this particular option. But it's very true, as Samantha and Yoel and all the contributors have made very clear in their paper, tooling is required. It's not happening at the protocol level. It's not happening at the platform level. So we're pitching in and moving the ball a little bit up the field, so that we can start seeing the weaknesses in the APIs, the weaknesses in the platforms, and start that conversation about moving those things forward, so that we can either get to a plugin architecture or some way of extending that tooling to additional platforms. 
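As an illustration of the consensus mechanism Jaz describes, here is a minimal sketch, assuming you already have each observed server's published block or limit list as a set of domains. It is not IFTAS's actual Fedi Check implementation, just the aggregation idea: keep the domains that at least a given fraction of observed servers agree on.

# Minimal consensus-blocklist sketch. Assumes the observed servers' block/limit
# lists have already been collected; this only shows the agreement step.
from collections import Counter

def consensus_blocklist(
    observed: dict[str, set[str]],                 # observing server -> domains it blocks/limits
    thresholds: tuple[float, ...] = (0.51, 0.66, 0.80),
) -> dict[float, list[str]]:
    """Return, for each agreement threshold, the domains blocked by at least
    that fraction of observed servers."""
    total = len(observed)
    counts = Counter(domain for blocks in observed.values() for domain in blocks)
    return {
        t: sorted(d for d, n in counts.items() if n / total >= t)
        for t in thresholds
    }

# Example: three observed servers, one domain blocked by all of them.
observed = {
    "a.example": {"spam.example", "hate.example"},
    "b.example": {"spam.example"},
    "c.example": {"spam.example", "hate.example"},
}
print(consensus_blocklist(observed))
# {0.51: ['hate.example', 'spam.example'], 0.66: ['hate.example', 'spam.example'], 0.8: ['spam.example']}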

Mike McCue:

So it sounds like there's no one person who's the arbiter of the list. It's more like you're looking programmatically at all the different instances and what they've blocked or, you know, silenced, and then you pull that together algorithmically. That's kind of interesting, because there's no one owner of the list at that point. 

Jaz-Michael King:

There's a small number where IFTAS picked up the ban hammer, like you did, and said no, and those are illegal-content or egregiously inauthentic, potential persistent-threat type services. I want to say that's around about 30, 35 services, not very large. 

Mike McCue:

Oh, really. Only 35 servers?

Jaz-Michael King:

On the do-not-interact side, that's what we specifically inject into that list. And then the bulk of the list, as I said, like you said, is derived from an observation of a very large portion of the Mastodon fediverse, and only those servers that have a large active user base, because we feel that, as I said, step one, baby steps, that's a pretty palatable list to consume. This particular service was built specifically to serve brand new administrators who need a day-one list. And an interesting statistic: you know, a lot of folks are worried about governance of a list. How do you get on the list? How do you get off the list? Who's in charge of the list? We have a very long policy document that we workshopped for three months, and then we trialed the policy for another 60 days to see what it did before even putting this into the world. And of those servers that we observe, I think in total they have a block or a limit of some kind, some recommendation, on, I want to say, about 1,600 to 1,700 servers. In total, they all agree on one. There is only one that they all 100% agree on. And I think that's a fascinating insight into decentralized governance and what it looks like.

And I should stress for people listening, if you're one of our observed servers, you cannot be on the list, and thereby consume the list and self-perpetuate the list. But yeah, one out of the, as I said, 1,600 to 1,700 domains is in 100% agreement as a do-not-interact. So we have a small injection where we've just said no, and that goes into the list regardless of the observation. And then the bulk of the list is that observed list, and even there, I think it's about 130 to 140 domains in total.

Mike McCue:

Well, I'm sure everybody's dying to know what that one is, but I don't want to give them any airtime either. So Samantha, one of the things that you talked about is this concept that it's not just decentralized moderation, it's decentralized moderation rules and policies, right? Which I think is actually one of the things that I find so awesome about the fediverse, because, you know, not everybody thinks about content the same way culturally. You might be a researcher, and so you want to have a broader lens. It might be something where your kids are going to be on it, so you're going to want to be more careful there. There are all sorts of different reasons why you would want an instance to have a completely different set of moderation policies than another instance. And as a user, what I also think is phenomenally powerful is that I can go to an instance that maps to the kind of policies that I really value. It's liberating to be able to say, hey, we know what our policies are, this is definitely not our policy, and we're just going to make this decision, and not worry that we're making this decision for the entire fediverse, right? It's for our, you know, instance owners, or for our instance participants, right? That, I think, is a really wonderful aspect of the fediverse. It does mean, though, that it's just more complex. 

Samantha Lai:

Yeah, my first instinct was to be like, okay, yes, but complicating that: it's also kind of difficult for everyone involved right now, right? Because if you see, for example, an instance that is filled with just really bad content, there's still the question of, did this happen because of the moderator's intent, or did this happen because there weren't enough moderators to stop a bunch of, say, malevolent actors from overwhelming the server? Right now, and Jaz can speak a lot more to it, I'm sure, moderators have to do the best with what they can on an ad hoc basis. So yes, you can develop your own rules, and you can try to enforce your own rules, but there can definitely be more support for moderators across legal, tooling, all kinds of different fronts to make their work a lot easier. On the other hand, as of right now, there's also pretty limited information on how users choose between servers. You make your judgment based on what you see exists, or maybe based on the recommendation of someone else in a given community. But there isn't really a lot of information out there comparing, oh, this is this server's moderation policy on paper, and this is how it happens in actuality, and this is the number of accounts a given server has taken down compared to another server. So if we're talking about transparency and user choice, there's definitely more to be done there. Of course, that's also complicated, because if you ask a super small server to be really transparent about all the decisions they make, they could face flak from their users or from other community members. So all that to say, yeah, things are still a little bit complicated. 

Jaz-Michael King:

Something that similarly brings me a lot of joy when I'm looking at the fediverse: I curate a map. So I'm from Wales, Cymru, in the United Kingdom. And I say Cymru because Wales is actually the English name for the country, and it means foreigners. The reason I got into the fediverse was to build a Welsh and English bilingual social media opportunity, to provide digital citizenry for folks who want to interconnect over the internet in the language of Cymru, in the Welsh language. And so I have a special place in my heart for preserving language. So just as a hobby, I maintain a map of servers that offer either a particular geography or a language service. And when you look at that and you see the server in Madagascar, in Iceland, in Singapore, in Costa Rica, in Boston, every one of those was put together by someone who gave a hoot about that, for their language, for their culture, and I just find that so visually rewarding to see writ in reality, right? The Tunisia server, like, who knows what rules they're running, right? But they're doing it for the Tunisians. And I think that's just so incredibly empowering and just a wonderful gift that we've brought back to the web. And certainly, I think there's room to convene those folks in ways where they can share up to the point that they agree and then happily disagree about some other things, and grow the movement that way. But it's just so wonderful, to your point, to see that governance literally, physically and linguistically decentralized across the planet. It's fantastic. 

Mike McCue: 

Yeah, the local perspective on this, I think, is another element of moderation that is so hard to scale. If you're in a centralized, walled garden, right, you've got to have people who know and understand moderation in all these different countries and all these different languages and all these different dialects and local customs and so on, right? And that's basically impossible to scale; it's incredibly hard to do. But all these instances are locally run by local administrators, and they have a set of policies and rules that are probably fairly in common with a lot of others, but then there are specific local things going on. Maybe there's a local election. Maybe there's something locally happening.

For example, the SFBA folks were telling me how, with the Waymo cars in San Francisco, a lot of normally polite people get very angry and can say some truly problematic things because of these Waymo cars, and so, you know, they'll get put in time out, right? For a little bit of time. And it was interesting, because that was a very specific local dynamic that they were kind of on the lookout for. So I think that's another really powerful dimension of decentralized moderation: you've got that local awareness and local experience. And, you know, we haven't really talked much about other kinds of tools that potentially could help here. For example, at Flipboard, we have an LLM specifically dedicated to toxic comments, and we're actually training that LLM to figure out and flag for our moderators, hey, this is a very problematic comment, and that helps us more quickly stay on top of challenging situations with toxic comments. Have you guys been looking at these kinds of AI-based tools that could allow you, as a moderator, to scale, to be able to look at a larger number of posts or a larger number of users, or find other similar posts or similar users? 

Jaz-Michael King:

IFTAS, when we came to the market, did not show up with all of the answers or all of the solutions to any of these problems, but asked people: of these things that we think you might want, what is it that we can, you know, go out, source funding for, and provide for you? And it was very interesting to see that near the top of those lists was hate speech detection, toxic comment detection, with a very low desire for AI.

But we have seen plenty of folks who are interested in at least being flagged that something needs to be reviewed. Very few people that we've run into are interested in an LLM or an AI, or any kind of machine learning, making decisions for them. They built a community. They have their moderators, and it's a pretty nice ratio, spread large across the fediverse, of moderators to members. So we've been looking at it. There are, oh, so many, as I'm sure you're aware from the work you're doing, training bugaboos: what is and isn't toxic in what context, and whether it's a recounting of a lived experience versus, you know, actual toxicity. So we are looking cautiously at tooling that you could tap into, to at least open up avenues to get access to tooling or APIs that you otherwise wouldn't technologically or financially be able to access. 

Samantha Lai:

Commercial platforms typically have automated enforcement as part of their trust and safety processes, because when you scale up a service, at some point it might no longer be feasible for a person or a group of moderators to go through everything that gets posted. So, building on Jaz's answer, there are some kinds of automation that could be useful, such as automation for CSAM detection, or automation for detecting the same URL or the same hashtag being posted over and over and removing that all in one fell swoop. 

But when it comes to LLM coding of hate speech or problematic content, as we've seen in its use on commercial or centralized social media platforms, a lot of that can be subject to biases, and it might not be sensitive to differences in language and context. So when it comes to deploying that kind of tooling, there are also various considerations in terms of how suited it is for a community. For example, something that is used to detect nudity wouldn't be appropriate for breast cancer awareness content. 

Mike McCue:

Right, right. Yeah, I think that's a very, very good point. And, you know, back to your point, Jaz, how you use these tools is incredibly important, right? If they're just making decisions, that could obviously be very problematic. But as a way to be kind of an early warning detector, so that moderators can get on top of a problem, that seems quite valuable, which is how we use it at Flipboard.

What came to mind for me, as we've been on this journey of thinking through how to use LLMs for this type of thing, is: what if you could have kind of a transparent model, where, hey, this is an LLM, here's the data that it's trained on, everyone be aware of the data, here's what we're training it on. Then everyone knows how this particular classifier is going to score content that it sees, and then, of course, you could decide how much you're going to trust the readout from this tool or not. I think that concept of a shared training model for early detection is something that could be really interesting, and certainly something that I've been thinking a lot about. Samantha, you mentioned some of these other companies that have been operating larger networks for a long time at scale; they have to have some of these automation tools, because there are just too many people and too many things going on to do it all manually. So one of the things I'm hopeful for is that those companies and Flipboard can ultimately get into a mode where we could actually start to contribute some of these tools to the greater community, and then collaborate on the training of the tools, or the development of the tools, do it in an open source model, so that we could all benefit from them, and ultimately have a whole suite of things that moderators could decide to utilize. 
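A minimal sketch of the "flag, don't decide" pattern discussed here: posts above a server-chosen toxicity threshold go into a human review queue, and nothing is removed automatically. The classifier is a stand-in callable; Flipboard's actual model and training data are not described in this episode, so treat every name below as hypothetical.

# Flag-for-review triage: score posts with a swappable classifier and queue
# anything above a community-chosen threshold for a human moderator.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Flag:
    post_id: str
    score: float
    reason: str

def triage(
    posts: list[tuple[str, str]],            # (post_id, text)
    score_toxicity: Callable[[str], float],  # placeholder classifier, returns 0.0-1.0
    flag_threshold: float = 0.8,             # each community picks its own
) -> list[Flag]:
    """Return posts for human review, worst first. No automated removal."""
    flags = [
        Flag(post_id, score, f"toxicity score {score:.2f} >= {flag_threshold}")
        for post_id, text in posts
        if (score := score_toxicity(text)) >= flag_threshold
    ]
    return sorted(flags, key=lambda f: f.score, reverse=True)

# Stub classifier just to make the sketch runnable; a transparent deployment
# would document exactly which model and training data sit behind this call.
demo = triage(
    [("1", "have a nice day"), ("2", "you are garbage")],
    score_toxicity=lambda text: 0.9 if "garbage" in text else 0.1,
)
print(demo)  # [Flag(post_id='2', score=0.9, reason='toxicity score 0.90 >= 0.8')]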

Samantha Lai:

Yeah, because a massive existing gap we see right now with tooling is that, in order to have tooling, you need funding, and a lot of decentralized social media platforms and a lot of servers run either on community contributions or just at the moderator's own expense. So there are many things that need to be done in terms of creating a funding cycle, or finding ways to fund tooling efforts, just to make sure that there's more tooling out there.

Jaz-Michael King:

Yeah, I'll throw in. So IFTAS has three main goals. One is community. One is safety support, with the regulatory and compliance guidance and that sort of thing. And the third piece is, what can we buy for you that you can't, right? Either you don't want to execute a contract, or you're in the wrong country, or you just don't have the money. Malware, for example, or bad URLs. Some of these problems are, I won't say solved, but very readily remedied to a large degree simply by moving dollars from IFTAS to an API provider and saying, we would like the list of bad URLs now, please.

The question is getting that intelligence into the fediverse implementation, whether it's Mastodon or something else. Mastodon is by far the most mature in terms of its API, and most mature in the amount of people who've put effort into its trust and safety features, and still it is incredibly hard to work with as a third-party developer, as a third-party trusted flagger, let's say. So in terms of the funding, one answer, I think, is IFTAS and others pitching in on sort of the fediverse version of composable moderation, as Bluesky terms it, where, instead of it being codified at the protocol level, it's codified at the people level. And people can, as you said, take a look at that model and say, that one meets my needs, I will take that, please, thank you very much. But I'll say, over the past year, we've had money that we would have loved to have spent on IP reputation, on URL reputation, on spam detection, and we simply don't yet have a way to move that information into the platforms. Of the two tools that we're working on, one is over the API and the second is over webhooks, and so we're approaching it that way now: if we could get more platforms to take up standard webhooks as a way to move information between services, then maybe that's a way through, because it's really painful to have to work on one platform at a time. 
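To make the webhook idea concrete, here is a minimal sketch of a receiver an instance could run to accept moderation advisories (bad URLs, spam domains) pushed by a trusted third party. The endpoint, payload shape, and signature header are assumptions for illustration only; there is no agreed fediverse standard for this yet, and this is not an IFTAS or Mastodon API.

# Hypothetical webhook receiver for moderation advisories, using Flask and an
# HMAC signature so only a trusted sender is accepted.
import hashlib
import hmac

from flask import Flask, abort, jsonify, request

app = Flask(__name__)
SHARED_SECRET = b"replace-with-a-real-secret"   # agreed with the sender out of band
advisories: list[dict] = []                     # stand-in for a real datastore

def signature_ok(raw_body: bytes, signature: str) -> bool:
    """Verify an HMAC-SHA256 signature over the raw request body."""
    expected = hmac.new(SHARED_SECRET, raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature or "")

@app.route("/webhooks/advisories", methods=["POST"])
def receive_advisory():
    if not signature_ok(request.get_data(), request.headers.get("X-Signature", "")):
        abort(401)
    payload = request.get_json(force=True)
    # Hypothetical payload, e.g. {"kind": "bad_url", "value": "https://...", "note": "..."}
    advisories.append(payload)
    # A real integration would surface this to moderators or apply a local
    # policy, rather than acting on it automatically.
    return jsonify({"received": True}), 201

if __name__ == "__main__":
    app.run(port=8080)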

Mike McCue:

Samantha, what's your take on composable moderation, Bluesky's ideas there? Have you looked at that?

Samantha Lai:

I think it's really cool. I think it's a way that allows users more flexibility in terms of the content they do and don't want to see. It's a really interesting new model of governance that really captures the nature of decentralization, which is that you are meant to empower users to create their own feeds and decide what content they do and do not want to see. 

Jaz-Michael King:

And you know what's really interesting about the implementation there? So IFTAS operates a Bluesky labeling service. We are not labeling anything as yet; we purely took their open source moderation tooling, stood it up, and we've been looking at moving the sort of core common harms and definitions into it to start playing with that. But what's interesting is that people are subscribing to IFTAS as a moderation service, and we're not doing anything, and they don't care, because they've shopped for numerous of these labeling services and those are meeting their needs. But more interestingly, folks on Bluesky or an AT Proto instance who subscribe to, for example, the IFTAS labeler can say, I'm interested in your label for explicit content. And then we, you know, look through the content and label it however we do, and we pass that information back to the client, not the instance. And the client is configurable to interact with that label. So you not only get to shop for the labeling of your choice, you get to shop for what happens with it. So, you know, explicit content, to the earlier comment about different kinds of communities: show all, show some, show none, or hide, or blur, but that's at the client level. You not only get to shop for who you trust to flag or label content, you also take on our defaults, which are then configurable by you as the end user, what to do as you ingest that label. Fascinating. 
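Here is a small sketch of the client-side decision flow Jaz describes: labels arrive from labelers the user has subscribed to, and the user's own per-label preference (show, warn, blur, or hide) decides what the client does with a post. This is just the logic, not the actual AT Protocol or Bluesky client API, and the labeler name is a made-up example.

# Client-side label handling: only labels from subscribed labelers count, and
# the user's per-label preference (or the default) picks the strongest action.
from dataclasses import dataclass, field
from enum import Enum

class Action(Enum):
    SHOW = "show"
    WARN = "warn"
    BLUR = "blur"
    HIDE = "hide"

@dataclass
class Post:
    text: str
    labels: dict[str, str] = field(default_factory=dict)  # label -> labeler that applied it

@dataclass
class Preferences:
    subscribed_labelers: set[str]
    per_label: dict[str, Action]          # the user's overrides
    default: Action = Action.WARN         # labeler default, still user-configurable

    def action_for(self, post: Post) -> Action:
        """Strongest action among labels from labelers this user trusts."""
        order = [Action.SHOW, Action.WARN, Action.BLUR, Action.HIDE]
        chosen = Action.SHOW
        for label, labeler in post.labels.items():
            if labeler not in self.subscribed_labelers:
                continue  # labels from unsubscribed labelers are ignored
            action = self.per_label.get(label, self.default)
            if order.index(action) > order.index(chosen):
                chosen = action
        return chosen

prefs = Preferences(
    subscribed_labelers={"labeler.iftas.example"},   # hypothetical labeler name
    per_label={"explicit": Action.BLUR},
)
post = Post("...", labels={"explicit": "labeler.iftas.example"})
print(prefs.action_for(post))  # Action.BLUR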

Mike McCue:

Yeah, that's fantastic. So there's an election coming up in the United States that I'm sure is going to create a fair amount of challenges from a moderation point of view. As you talk to instance owners and look at this from your professional point of view, what are you most concerned about as we go into this election cycle in the fediverse? 

Samantha Lai:

Coordinated inauthentic behavior. The fediverse is not well equipped to deal with it, because the way centralized platforms deal with coordinated inauthentic behavior includes longitudinal tracking of who the threat actors are and what they're up to. That's super resource intensive. There's also the issue of centralized telemetry, which means having an overview of what accounts are on the service and what their activity is. When you are an admin or moderator, you only have access to what's going on on your one server. So if someone wants to really cause chaos, it's a lot easier for them to hide among the crowds if they have, say, one or two accounts on each server, as opposed to, you know, 100 accounts posting the same thing on one service. So there are many ways in which we are ill-equipped. And I think it was Jaz, actually, who pointed out that one or two servers might have already tried to do something funny before, and those weren't very effective. I'd also be careful not to overhype the threat, because just because a set of coordinated inauthentic accounts exists doesn't mean that they're necessarily effective. A lot of them often just do their own thing within their own little crowds. But if something is effective, right now we don't really have the infrastructure to detect or handle it in any meaningful way.

Jaz-Michael King:

I believe the number of nations holding meaningful elections this year is more than in any one year in human history. It is an incredibly interesting time for humanity at large around the planet this year, and we've been approached, and many are having the conversation, around what to be concerned about, especially with election interference. We have seen coordinated inauthentic activity, for sure. We've seen servers dedicated to just trying to inject content into the fediverse. We have seen coordinated accounts. We monitored someone, or a group, setting up one or two accounts on dozens of servers and attaching them to an LLM creation tool that was generating sort of harmless "look at the beach" posts, followed by a particular message, a particular influence that they were interested in pushing. And, you know, catching that was just shoe leather; there were no automated abilities to either observe it, detect it, or report it. But it's really happening.

I will say that I think far too few people understand just how many organizations are pulling in all of the content on the fediverse, not using ActivityPub, or certainly not bilaterally communicating with ActivityPub, and observing and investigating disinformation, misinformation, interference, all manner of activities. And these folks are operating for their own benefit, or for the world at large; they're not Mastodon people or ActivityPub people or fediverse people. They're disinformation people, and they're monitoring numerous platforms, and the fediverse is one of them. They are pulling everything in, and they are examining it, and the fediverse is not generally benefiting from their findings, although, you know, we are working on that. Again, we have two-thirds of that figured out; it's really just a question of how to get structured intelligence out to folks and then what we want them to do with it. But it's certainly top of mind for many folks, and we've put quite a few resources up on how to research something that you think might be disinformation or election interference activity. So it's definitely being discussed and thought of. But I will end with this: we built a fediverse that is not algorithmic. It's a chronological timeline, and you see what you follow and what your friends follow. And for the most part, to Samantha's comments on how well these actors can even do in the fediverse: not that well, because there's no virality. If you don't follow them, no one sees them. So it's been interesting. Of course, it's a small audience: 15 million accounts, 1.5 million active monthly. So it's not a rich pond to be fishing in to generate the influence you might be seeking. But even if it were, it's just really hard to reach those people, because you can't buy your way to the top. You can't boost your post with some dollars. You can't make something viral. So, and I want to be very, very cautious here, it's a little bit self-healing. 

Mike McCue:

Samantha, what's your take on Bluesky's custom feeds, and how can that, you know, be helpful or more challenging from a moderation point of view?

Samantha Lai:

Whenever you have feeds, or whenever you have algorithms, you become more vulnerable to, as Jaz pointed out, algorithmic manipulation, inauthentic campaigns trying to boost content for more engagement. But, bringing us back to Jaz's answer, when you have custom feeds, or when there are more ways for people to choose how they want things algorithmically ranked, there can also be inherent resilience in the system; it very much depends on how people use it. So I still feel like we're a little early in how people are choosing to use these tools to really tell anything. But it's exciting in terms of how it allows people to, again, customize their own experiences in terms of what they do and don't want to see. 

Jaz-Michael King:

Samantha and I were in the room with a large company that has large algorithms recently, and it was over the lunch break that something that had been sort of eating away at the edges all day dawned on me very clearly. The algorithmic choices available on big social are so diametrically opposed to the intent of the fediverse. Quite simply, they want as many eyeballs for as many minutes as possible, right? That is the value proposition that they can sell to investors and advertisers, and more power to you, good stuff, keep going. In the fediverse, when we have more eyeballs for more minutes, we pay that bill, right? If I accept your toxic content, I'm literally paying to host it and transmit it to others. And the core intent is to facilitate meaningful human interaction; I believe that's what I see the most. And so, unfortunately, the big bad algorithms have generated such a bad name for themselves that it's very hard to have that conversation in the fediverse with some portion of that audience. But we have, you know, the Explore feed. There's some algorithm there. There's something happening there to tell you, this is trending, more people are interacting with this. As a hobby, I administer the Welsh instance. The moderation team handles that; I don't do the moderation there, but I'm still the admin. I've got the buttons. And so recently, I just sent the question to the community: what is it you like about this? Why do you like it? And let me know if I can tell other people. And I was really struck by how many of those responses were: I get to see the things I want to see. I get to see the things I asked to see. No one's telling me to look at something I didn't want to see. And of course, I'm aware of that, I've long been aware of that, but it was really visceral how folks were very clear on something that you think of as sort of a big, hokey technology problem. And these were, you know, people doing crochet and walking in the park. These aren't your sort of ActivityPub tech nerds. These are just folks who want to have some fun, read some content, read some posts, maybe post themselves. And I would say a good fifth to a quarter of those responses were to that specificity: I get to see what's interesting to me, I get to see what I asked for. So I think there's room, as you've both mentioned, for some form of algorithms, algorithms that can be opted into, configured, tweaked. We run one on Bluesky, of course; for trust and safety, there's the trust and safety feed. And to some degree, Mike, Flipboard is an algorithm, right? I mean, this is a curated feed of wonderful content, and you can opt into that, and that drives what's coming in. So I would love to see more and more conversations around acceptable algorithms. And I think, seeing as he invented the internet, we should just rebrand them "Al Gore-rithms," and maybe that can make them a bit more palatable.

Mike McCue:

Well, and then not having one algorithm to rule them all that is controlled by, you know, these profit interests in a completely non-transparent way, right? That is a fundamental strategic benefit of the fediverse. And I'm very excited to see how that all plays out in the coming months and years. 

While the fediverse is inherently decentralized, that's what it's all about, there's also a central group of people, you know, Team Fediverse, who are working across all of these instances and disciplines and platforms to just try to do the right thing collectively for the betterment of the whole, and that's incredibly exciting. I'm incredibly grateful for the work that you've done, Jaz, with IFTAS and IFTAS Connect, and Samantha, I think the research that you've been doing and the paper you wrote are incredibly important. We'll have a link to it in the show notes, and I highly encourage people to check it out.

This is obviously an ongoing topic. There are going to be a lot of twists and turns, especially in this election year. So it was great to get a chance to connect with you guys and learn more about this. Thank you for the work you've done, and thank you for being on Dot Social. 

Samantha Lai:

Thank you for having us. 

Jaz-Michael King:

Yeah, thanks, Mike, wonderful to have this conversation. And don't sell yourself short on just how much you're contributing to pushing the fediverse forward, too.

Well, thanks so much for listening! You can find Samantha at @samlai.bsky.social and Jaz at @jaz@mastodon.IFTAS.org 

You can follow Mike on Mastodon at @mike@flipboard.social and @mike@flipboard.com

Big thank you to our editor, Rosana Caban, and to Anh Le.

To learn more about what Flipboard's doing in the fediverse, sign up via the link in this episode’s description.

Until next time, see you in the fediverse!