Hey everyone, just wanted to give a quick update.

After opening things up more on Plebbit/Seedit, we got hit pretty hard with spam and some NSFW content. It got out of hand fast and honestly, it's worse than we expected.

To stop that from messing everything up, we’re thinking about adding optional email or SMS verification when people sign up.

This isn’t something we wanted to do at first, but it seems necessary to protect the space and avoid getting buried in garbage.

We're still fully open source, and we still want this to feel like a community. If you've got other ideas or feedback, feel free to share.

  • sugar_in_your_tea@sh.itjust.works · +2 · 4 hours ago

    I’ve thought about this idea for my own project, and my best solution is to have a network of trust where people rely on curation from their peers and thus only see the content their peers have approved.

    The main benefit is also the main downside: content you disagree with is still there, you just don’t see it. That means there could absolutely be pockets of CSAM and other content on the network, but your average user wouldn’t have that on their system since they only store curated content.

    I’m not sure how I feel about that, but I think it’s the best you can do without centralized moderation.
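
    If it helps, here's a minimal sketch of that curation rule in TypeScript; every name here is hypothetical, not any real Plebbit/Seedit API:

    ```typescript
    // Peer-based curation: a post is only stored/shown if at least one
    // directly trusted peer has approved it. Unapproved content stays on
    // the network but never reaches this user's disk.

    interface Post {
      id: string;
      author: string;
      content: string;
    }

    interface Approval {
      postId: string;
      approvedBy: string; // peer ID of the curator
    }

    class CuratedFeed {
      constructor(private trustedPeers: Set<string>) {}

      // Keep a post only if someone I trust has vouched for it.
      shouldStore(post: Post, approvals: Approval[]): boolean {
        return approvals.some(
          (a) => a.postId === post.id && this.trustedPeers.has(a.approvedBy),
        );
      }
    }

    // Usage: spam with no approvals from my peers is simply never stored.
    const feed = new CuratedFeed(new Set(["alice", "bob"]));
    feed.shouldStore({ id: "p1", author: "spammer", content: "..." }, []); // false
    ```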

    • Esteban Abaroa @lemmy.world (OP) · +2 · 3 hours ago

      we plan on having as many "web of trust"-like features as possible at some point. for example, you could get recommended content/communities that users you upvote participate in. this can be implemented easily, and it's very open and P2P.
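
      A rough sketch of that recommendation idea (the data shapes are assumptions for illustration, not part of the plebbit protocol):

      ```typescript
      // Score each community by how many of the users I've upvoted
      // participate in it, and recommend the highest-overlap ones first.

      interface Activity {
        user: string;
        community: string; // subplebbit the user participates in
      }

      function recommendCommunities(
        upvotedUsers: Set<string>,
        activity: Activity[],
        limit = 5,
      ): string[] {
        const scores = new Map<string, number>();
        for (const { user, community } of activity) {
          if (upvotedUsers.has(user)) {
            scores.set(community, (scores.get(community) ?? 0) + 1);
          }
        }
        return [...scores.entries()]
          .sort((a, b) => b[1] - a[1])
          .slice(0, limit)
          .map(([community]) => community);
      }
      ```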

      but in our opinion it’s not technically possible to have moderation/discovery that is fully web of trust for a few reasons:

      • you need to bootstrap from somewhere, you can't just start "syncing/downloading" content randomly and then manually like/dislike stuff to build your personal web of trust from scratch. people don't want to download GBs of data and spend hours liking/disliking stuff to get started.

      • pure web of trust is easily gameable: you can spin up millions of bots that upvote each other to gain rank in other people's webs of trust.

      • pure web of trust doesn’t have DDOS resistance, someone can completely DDOS the gossip network and prevent you from ever bootstrapping a real web of trust.

      also, assuming someone did develop a scalable, UX-friendly, and DDOS-resistant pure web of trust algorithm, it would probably have a UX that's very different from reddit (and message boards in general), and our goal is to recreate the UX of reddit/message boards exactly, because we like them. The thing we don't like about them is the centralization/commercialization/etc. for example, we don't like that reddit killed apollo/rif, and we don't like that they ban very popular subs that a lot of people enjoy.

      • sugar_in_your_tea@sh.itjust.works · +1 · 2 hours ago

        bootstrap

        Sure, so bake in a set of default "mods" whose influence goes away as people interact with the moderator system. Start with a CSAM bot, for example (fairly common on Reddit, so there's plenty of prior art here), and allow users to manually opt in to make those moderators permanent.
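
        As a sketch, one way to read that fading-defaults idea (the half-life constant and all names here are made up):

        ```typescript
        // Blend the default moderators' verdicts with the user's own web of
        // trust, shifting weight toward the latter as the user accumulates
        // interactions.

        function defaultModWeight(userInteractions: number, halfLife = 500): number {
          // Starts at 1.0 and decays toward 0 the more the user votes/reports.
          return Math.pow(0.5, userInteractions / halfLife);
        }

        function combinedScore(
          defaultModScore: number, // e.g. negative if a default CSAM bot flagged it
          webOfTrustScore: number, // score derived from the user's own peers
          userInteractions: number,
        ): number {
          // Users who opt in to keep the default mods permanent would simply
          // skip the decay (i.e. treat w as a fixed floor instead).
          const w = defaultModWeight(userInteractions);
          return w * defaultModScore + (1 - w) * webOfTrustScore;
        }
        ```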

        pure web of trust

        I don’t think anyone wants a pure web of trust, since that relies on absolute trust of peers, and in a system like a message board, you won’t have that trust.

        Instead, build it with transitive trust, weighting peers based on how much you align with them, trusting those they trust a bit less, and so on.
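
        Something like this, assuming a simple per-hop discount (the constants and shapes are placeholders, not a spec):

        ```typescript
        // Transitive trust: direct peers count at full alignment weight,
        // peers of peers at a discount, and so on, out to maxHops.

        type TrustGraph = Map<string, Map<string, number>>; // peer -> (peer -> alignment in 0..1)

        function transitiveTrust(
          graph: TrustGraph,
          me: string,
          maxHops = 3,
          hopDiscount = 0.5,
        ): Map<string, number> {
          const trust = new Map<string, number>();
          let frontier = new Map<string, number>([[me, 1]]);
          for (let hop = 0; hop < maxHops; hop++) {
            const discount = hop === 0 ? 1 : hopDiscount; // full weight for direct peers
            const next = new Map<string, number>();
            for (const [peer, t] of frontier) {
              for (const [neighbor, alignment] of graph.get(peer) ?? []) {
                const propagated = t * alignment * discount;
                // Keep the strongest trust path to each peer seen so far.
                if (propagated > (trust.get(neighbor) ?? 0)) {
                  trust.set(neighbor, propagated);
                  next.set(neighbor, propagated);
                }
              }
            }
            frontier = next;
          }
          trust.delete(me); // don't rank ourselves
          return trust;
        }
        ```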

        easily gameable

        Maybe? That really depends on how you design it. If you require a lot of samples before trusting someone (e.g. samples where you align on votes), the bots would need to be pretty long-lived to build clout. And at some point, someone is bound to notice bot-like behaviour and report it, which would limit how much the bot affects visible content.

        DDOS

        That can happen with any P2P system, yet it's not that common of a problem.

        it probably would have a UX that’s very different from reddit

        I don’t see why it would. All you need is:

        • agree/disagree - by default, would have little impact on moderation
        • relevance up/down (this is your agree/disagree metric)
        • report for rules violation (users could tune how much they care about different report categories)
        • star/favorite - dramatically increases your trust of that user

        Reddit/Lemmy have everything but a distinction between agree/disagree and relevant/irrelevant. People tend to use votes as agree/disagree regardless, so having that distinction could lead to better moderation.

        You'd need to tweak the weights, but the core algorithm doesn't need to be super complex: just keep track of the N most aligned users and some number of "runners up" so you have a pool to swap into the top group when you start aligning more with someone else. Keep all of that local and drop posts/comments that don't meet some threshold.
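
        A minimal sketch of that local loop, tying the signals above to the top-N pool; all weights and names are placeholders to be tuned:

        ```typescript
        // Track vote alignment per user, keep the top-N plus a runner-up
        // pool, and drop content whose trust-weighted score falls below a
        // threshold. Everything stays on the local machine.

        type Signal = "relevant" | "irrelevant" | "report" | "star";

        interface AlignmentStats {
          matches: number; // overlapping votes where we agreed
          samples: number; // total overlapping votes
        }

        class LocalModerator {
          private stats = new Map<string, AlignmentStats>();

          constructor(
            private topN = 100,
            private runnersUp = 50,
            private dropThreshold = -0.5,
          ) {}

          recordOverlap(user: string, agreed: boolean): void {
            const s = this.stats.get(user) ?? { matches: 0, samples: 0 };
            s.samples++;
            if (agreed) s.matches++;
            this.stats.set(user, s);
          }

          // Trusted pool = the topN + runnersUp users we align with most,
          // requiring a minimum sample count so short-lived bots rank low.
          trustedPool(minSamples = 20): Map<string, number> {
            return new Map(
              [...this.stats.entries()]
                .filter(([, s]) => s.samples >= minSamples)
                .map(([u, s]) => [u, s.matches / s.samples] as [string, number])
                .sort((a, b) => b[1] - a[1])
                .slice(0, this.topN + this.runnersUp),
            );
          }

          // Drop a post if its trust-weighted signal score is below threshold.
          shouldDrop(signals: { user: string; signal: Signal }[]): boolean {
            const pool = this.trustedPool();
            let score = 0;
            for (const { user, signal } of signals) {
              const trust = pool.get(user) ?? 0;
              if (signal === "relevant") score += trust;
              if (signal === "irrelevant") score -= trust;
              if (signal === "report") score -= 2 * trust; // reports weigh more
              if (signal === "star") score += 3 * trust;
            }
            return score < this.dropThreshold;
          }
        }
        ```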

        It’s way more complex than centralized moderation and will need lots of iteration to tune properly, but I think it can work reasonably well at scale since everything is local.

  • atzanteol@sh.itjust.works · +43/-2 · 22 hours ago

    Seedit is a serverless, adminless, decentralized reddit alternative. Seedit is a client (interface) for the Plebbit protocol, which is a decentralized social network where anyone can create and fully own unstoppable communities.

    In the plebbit protocol, a seedit community is called a subplebbit. To run a subplebbit, you can choose between two options:

    First, they take the dinglebop, and they smooth it out with a bunch of schleem. The schleem is then repurposed for later batches. They take the dinglebop and they push it through the grumbo, where the fleeb is rubbed against it. It’s important that the fleeb is rubbed, because the fleeb has all of the fleeb juice. Then a schlami shows up, and he rubs it and spits on it. They cut the fleeb. There’s several hizzards in the way. The blamfs rub against the chumbles. And the ploobis and grumbo are shaved away. That leaves you with a regular old plumbus.

  • poVoq@slrpnk.net · +22 · 22 hours ago

    where anyone can create and fully own unstoppable communities.

    One man's "unstoppable communities" are another man's "spam and NSFW content" 🤷

    I think you need to make up your mind about what you actually want. Gatekeeping access under the guise of fighting spam seems directly contradictory to your project's stated goal, and assuming it wouldn't be flooded with such content, when most of the internet consists of exactly that, was pretty naive.

  • atzanteol@sh.itjust.works · +15/-1 · 22 hours ago

    After opening things up more on Plebbit/Seedit, we got hit pretty hard with spam and some NSFW content. It got out of hand fast and honestly, it's worse than we expected.

    It’s a p2p decentralized social network. I’m honestly surprised you got anything that’s not kiddie porn and drug spam.

  • hddsx@lemmy.ca · +9 · 1 day ago

    I integrated Cloudflare's CAPTCHA thingus into Forgejo and my spam dropped to 0.
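
    For reference, that integration is roughly the following in Forgejo's app.ini, assuming Cloudflare Turnstile is the CAPTCHA in question (these keys are inherited from Gitea; check the docs for your version):

    ```ini
    ; app.ini: enable a CAPTCHA on registration via Cloudflare Turnstile
    [service]
    ENABLE_CAPTCHA = true
    CAPTCHA_TYPE = cfturnstile
    CF_TURNSTILE_SITEKEY = your-site-key
    CF_TURNSTILE_SECRET = your-secret
    ```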

  • hsdkfr734r@feddit.nl · +5 · 1 day ago

    I'm not sure which clients are used to connect. Perhaps some proof-of-work challenge for the connecting client to solve first? Anubis does this for HTTP(S) and browsers. I've seen it in the wild quite often in recent weeks, so it seems to be effective (until the scrapers learn to use Selenium or the like to mimic browsers).
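
    A toy version of that proof-of-work idea (not Anubis's actual scheme): the server hands out a nonce, and the client must find a counter whose hash has enough leading zero bits before its request is accepted. Solving costs many hashes; verifying costs one.

    ```typescript
    import { createHash } from "node:crypto";

    function hasLeadingZeroBits(hash: Buffer, bits: number): boolean {
      for (let i = 0; i < bits; i++) {
        const byte = hash[i >> 3];
        if ((byte >> (7 - (i % 8))) & 1) return false;
      }
      return true;
    }

    // Client side: brute-force a counter. Cheap for one human's browser,
    // expensive at scraper scale.
    function solve(nonce: string, difficultyBits: number): number {
      for (let counter = 0; ; counter++) {
        const hash = createHash("sha256").update(nonce + counter).digest();
        if (hasLeadingZeroBits(hash, difficultyBits)) return counter;
      }
    }

    // Server side: verification is a single hash, so the cost is asymmetric.
    function verify(nonce: string, counter: number, difficultyBits: number): boolean {
      const hash = createHash("sha256").update(nonce + counter).digest();
      return hasLeadingZeroBits(hash, difficultyBits);
    }

    const nonce = "session-abc123";
    const counter = solve(nonce, 16); // ~65k hashes on average
    console.log(verify(nonce, counter, 16)); // true
    ```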

  • AllYourSmurf@lemmy.world · +2 · 22 hours ago

    I’d suggest looking at 3 things:

    1. CAPTCHA. Not a perfect solution and AI will beat most of it soon, but it will help.
    2. Anti-bot tools. Something that does the equivalent of miring (bogging down) AI web crawlers.
    3. Identity systems. Not in the sense of a verifiable ID like a driver's license, but in the sense of establishing a strong link between a pseudonymous ID and the community it owns or interacts with (see the sketch below).
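
    On point 3, a pseudonymous ID can be as simple as a keypair: the public key is the identity, and everything the ID does is signed with it, so reputation accrues to the pseudonym without revealing a real-world identity. A toy sketch (plebbit's actual signature scheme may differ):

    ```typescript
    import { generateKeyPairSync, sign, verify } from "node:crypto";

    // The pseudonymous identity is just an Ed25519 keypair; the public key
    // serves as the ID.
    const { publicKey, privateKey } = generateKeyPairSync("ed25519");

    // The author signs each post in a community with their private key...
    const post = Buffer.from(JSON.stringify({ community: "sub1", body: "hello" }));
    const signature = sign(null, post, privateKey);

    // ...and anyone can verify the post against the pseudonymous ID,
    // linking it to that identity's history in the community.
    console.log(verify(null, post, publicKey, signature)); // true
    ```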