• Technically off-topic, but the sub's been slow, so I'll allow it. (Another mod might feel differently.)

    "intended for whistleblowing and journalistic discussion."

    Does this mean that you'll remove content about other subjects? I'm recalling an Anonymous forum from years ago that was intended for sharing information about doxing and other Anonymous techniques, but due to lax moderation policies, it gradually got taken over by carders, and then got shut down. And there have been privacy-centric tools intended for beneficial uses that wound up being used almost exclusively by people sharing CSAM. In short, what you intend and what actually happens may be very different, so you'll want to put a lot of thought and research into what you will or won't allow, and make your policies clear to users from the get-go.

    If you intend to enforce limits on types of content, how will this happen? Of course content moderation is a whole big topic in itself, and will be more difficult if you're trying to minimize data retention. How will you deal with a problematic user who keeps posting prohibited content? Or attempts to disrupt the platform itself?

    I realize you're not talking about a VPN, but this list of questions from TorrentFreak is a good starting point; you should have answers ready for most of them.

    If you need to bring on employees or volunteers, what precautions are in place to make sure they're not government agents, blackmailers, or others with ulterior motives?

    Thanks for letting me post; I don’t do a lot of posting here.

    This is a fair and important set of questions, and you’re right that intent alone doesn’t determine how a platform evolves.

    DAR (that’s the forum’s tentative name) has a purpose that is narrow but not exclusionary. It is intended primarily for whistleblowing, investigative journalism, and discussion of institutional misconduct. That doesn’t mean every post must strictly fall under those labels, but it does mean the platform is not meant to be a general-purpose anonymous forum.

    To your first question: yes, there are limits, and they are explicit.

    DAR will not allow:
    • Content that is illegal under Swiss law (including CSAM, trafficking, or credible threats of violence)
    • Direct facilitation of fraud, financial crime, or exploitation
    • Targeted harassment campaigns or doxing that lack a clear public-interest justification
    • Attempts to weaponize the platform for disruption, coercion, or spam

    Outside of those boundaries, DAR intentionally errs on the side of speech — especially speech related to accountability, corruption, abuse of power, or matters of public concern.

    Moderation under data minimization

    You’re also right that moderation becomes harder when retention is limited — and that tradeoff is intentional.

    DAR’s approach is:
    • Content-based moderation, not user-based profiling
    • Short retention windows (posts expire automatically)
    • No persistent user accounts
    • No long-term behavioral tracking

    Problematic content is addressed at the post level, not by building dossiers on users. If someone repeatedly posts prohibited material, their posts are removed when identified, but the system does not attempt to construct a long-lived identity around them. That’s a limitation by design.
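
    To make that concrete, here is a rough sketch of the shape of the post store, in Python. It isn’t DAR’s actual code; the SQLite backend, the 72-hour window, and the table and function names are placeholders. The point is structural: there is nowhere for an identity or a dossier to live.

        # Illustrative only: an ephemeral post store with no user identity.
        # The backend, retention window, and names are assumptions, not DAR's schema.
        import sqlite3
        import time
        import uuid

        RETENTION_SECONDS = 72 * 3600  # assumed retention window

        def init_db(conn: sqlite3.Connection) -> None:
            # Note what is absent: no user id, no IP column, no accounts table.
            conn.execute("""
                CREATE TABLE IF NOT EXISTS posts (
                    post_id    TEXT PRIMARY KEY,
                    body       TEXT NOT NULL,
                    created_at INTEGER NOT NULL
                )
            """)
            conn.commit()

        def create_post(conn: sqlite3.Connection, body: str) -> str:
            post_id = uuid.uuid4().hex  # random id, not linkable to an author
            conn.execute(
                "INSERT INTO posts (post_id, body, created_at) VALUES (?, ?, ?)",
                (post_id, body, int(time.time())),
            )
            conn.commit()
            return post_id

        def remove_post(conn: sqlite3.Connection, post_id: str) -> None:
            # Moderation acts on the post itself; there is no profile to update.
            conn.execute("DELETE FROM posts WHERE post_id = ?", (post_id,))
            conn.commit()

        def purge_expired(conn: sqlite3.Connection) -> int:
            # Scheduled job: anything older than the retention window is deleted.
            cutoff = int(time.time()) - RETENTION_SECONDS
            cur = conn.execute("DELETE FROM posts WHERE created_at < ?", (cutoff,))
            conn.commit()
            return cur.rowcount

    The purge job runs on a schedule, so even posts nobody reports age out on their own.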

    This does mean DAR will never be perfectly “clean.” The goal is risk reduction, not total control.

    On platform capture and misuse

    You’re absolutely correct that anonymous platforms can drift or be captured. DAR’s countermeasures are structural, not reactive:
    • The scope is deliberately narrow and documented up front
    • Content that clearly falls outside that scope is removed
    • There is no economic incentive for high-volume abuse (no ads, no engagement farming)
    • Ephemerality limits long-term exploitation

    DAR is not trying to be everything to everyone — that’s how platforms lose their center of gravity.

    Staffing, volunteers, and trust

    At least initially, DAR is operated by a very small team with a minimal footprint. There is no open moderator recruitment, no volunteer moderation pool, and no internal access to historical datasets, because those datasets don’t exist.

    Any future contributors would have:
    • Minimal technical access
    • Compartmentalized responsibilities
    • No visibility into user metadata (because it isn’t retained)
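
    As a sketch of the kind of compartmentalization I mean (role names and capabilities here are placeholders, not the real access layer):

        # Illustrative capability scoping for contributors; not DAR's actual access layer.
        from enum import Enum, auto

        class Capability(Enum):
            REMOVE_POST = auto()       # act on individual content
            EDIT_POLICY_PAGE = auto()  # maintain the public policy text

        # Each role gets the narrowest set of capabilities it needs.
        # There is deliberately no "read user metadata" capability,
        # because that metadata is never stored in the first place.
        ROLE_CAPABILITIES = {
            "content_reviewer": {Capability.REMOVE_POST},
            "policy_editor": {Capability.EDIT_POLICY_PAGE},
        }

        def is_allowed(role: str, capability: Capability) -> bool:
            return capability in ROLE_CAPABILITIES.get(role, set())

        # Example: a content reviewer can remove a post but cannot touch policy text.
        assert is_allowed("content_reviewer", Capability.REMOVE_POST)
        assert not is_allowed("content_reviewer", Capability.EDIT_POLICY_PAGE)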

    This is not a “trust us” model — it’s a “there’s nothing to misuse” model.

    You’re right to cite examples where privacy-centric tools were abused and shut down. DAR is built with those failures in mind. It doesn’t promise invulnerability, purity, or perfect outcomes — only that it is deliberately constrained, transparent about those constraints, and honest about the tradeoffs.

    I appreciate the skepticism. Platforms like this should earn trust slowly, not demand it up front.

    I'm glad you're thinking about these issues. I hope you'll be able to keep the platform true to its purpose.

    Oh, and I recalled a couple other platforms I've seen come and go over the years, which I described briefly here. Just as more food for thought.

    This is great information. I’ll keep you posted. Thanks again.