Effective date: October 25, 2025
Applies to: SquadThree ("we," "us," or "our") and the Dispatch Rewind mobile/web services, including all user-generated content (UGC) features, messaging, profiles, comments, uploads, links, images, audio, video, livestreams, and in‑app interactions.
Zero tolerance. We prohibit any content or conduct that sexually exploits or endangers children or young people. We remove such material, report it to appropriate authorities, and permanently disable involved accounts. We design our product, moderation, and operations to prioritize child safety.
1. Definitions
Child / Minor: Any person under the age of 18, or under the age of majority in their jurisdiction if higher.
CSAM (Child Sexual Abuse Material): Any visual or textual depiction of a child engaged in sexual activity or sexualized nudity, or any representation created, adapted, or modified to make it appear that a child is engaged in sexual activity (including AI‑generated, synthetic, animated, or illustrated content).
Grooming / Enticement: Any act to befriend, manipulate, coerce, or pressure a child to obtain sexual content, engage in sexual activity, share personal information, or meet offline.
Sextortion: Coercing a person (including a minor) to provide sexual content or money, or to perform sexual acts under threat of exposure, doxxing, or harm.
UGC: Any content submitted or generated by users, including posts, comments, messages, media, links, usernames, and profile data.
2. Our Commitments
Legal Compliance: We comply with applicable child safety laws, including reporting obligations. In the United States, when we become aware of apparent CSAM on our service, we will promptly report to the National Center for Missing & Exploited Children (NCMEC) CyberTipline as required by 18 U.S.C. §2258A. Outside the U.S., we report to relevant hotlines and/or law enforcement (e.g., IWF or local equivalents).
Prevention by Design: We employ safeguards to reduce the risk of child sexual abuse and exploitation (CSAE), including age‑appropriate defaults, limited discovery of minors, private-by-default options for young users, and protective friction for features that could be misused.
Swift Enforcement: We remove prohibited content and disable involved accounts. We preserve evidence securely and cooperate with law enforcement within applicable legal frameworks.
Continuous Improvement: We regularly test, audit, and strengthen our detection, review, and reporting processes. We publish transparency reporting on CSAE enforcement where feasible.
Safety Culture: We train our workforce on CSAE detection and response, provide access to mental‑health resources for moderators, and require vendor compliance with this standard.
3. Prohibited Content and Conduct
The following are strictly prohibited on our service:
Any CSAM, including but not limited to: images, audio, video, text, emojis, ASCII art, or deepfakes that depict or sexualize minors; child nudity (including partially clothed minors presented in a sexualized manner); sexualized commentary about minors; or links to such material.
Grooming, enticement, or solicitation of sexual content or contact with a minor; attempts to obtain sexual images; attempts to meet a minor for sexual purposes.
Sextortion or threats to share intimate images of a minor; blackmailing or coercion targeting minors.
Sexualization of minors in any form, including AI‑generated or manipulated media, drawings, animations, or text.
Misrepresenting one's age to gain access to minors, or to communities or features designed for minors.
Instructions or guidance that facilitate CSAE, including techniques for evading our safety controls or law enforcement.
4. Product & Platform Safeguards
We implement and iterate on the following safeguards (where applicable to our features):
Age Assurance & Young User Protections: Reasonable measures to deter underage access to adult features; stricter defaults for younger users; limited public visibility of minor accounts.
Private Messaging Controls: Message requests, rate limits, media‑send restrictions, and abuse detection signals; options to block and report; protective friction for unsolicited contact.
Discovery & Recommendations: Reduced recommendation and search exposure for accounts that appear to belong to minors; suppression of risky queries; proactive blocks on known abusive terms.
Upload & Link Scanning: Hash‑matching of uploads against industry databases (e.g., PhotoDNA‑type systems, where available); classifiers/heuristics for risky content; automated blocking of known CSAE domains/URLs (a simplified hash‑match sketch follows this list).
Re‑Upload Prevention: Perceptual hashing/fingerprinting to prevent resurfacing of previously removed CSAE material.
Rate Limiting & Anomaly Detection: Limits on high‑risk behaviors (e.g., mass messaging, rapid friend requests, or repeated outreach to presumed minors).
User Controls: Simple blocking; easy‑to‑find reporting; safety help center content; default privacy settings for younger users.
Human‑in‑the‑Loop Review: Trained specialists handle escalations and complex cases; dual‑review for high‑severity decisions when feasible.
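For illustration only, the sketch below shows the upload hash‑matching step in simplified form. The function and variable names, the SHA‑256 exact match, and the in‑memory hash set are assumptions made for readability; in practice this check would rely on perceptual hashing (e.g., PhotoDNA‑type systems) and vetted, industry‑maintained hash lists rather than exact file hashes.

```python
# Minimal illustrative sketch (hypothetical names and data): block an upload
# whose hash appears in a known-abuse hash list and flag it for escalation.
# Real deployments use perceptual hashing and industry-maintained hash lists,
# not exact SHA-256 matching of file bytes.
import hashlib

KNOWN_ABUSE_HASHES: set[str] = set()  # assumed to be loaded from a vetted hash list


def screen_upload(file_bytes: bytes) -> str:
    """Return 'blocked' if the upload matches a known hash, otherwise 'allowed'."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    if digest in KNOWN_ABUSE_HASHES:
        # Matched content is removed, evidence is preserved under restricted
        # access, the account is disabled, and the case is escalated for
        # mandatory reporting (see the reporting sections below).
        return "blocked"
    return "allowed"
```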
5. Reporting & Response Process
How to report: Users and non‑users may report CSAE concerns via in‑app reporting or through our web form at https://dispatchrewind.com/contact.
Account Actions: Content removal; feature restrictions; temporary suspension; permanent account disablement; device/IP bans for severe or repeated violations.
Evidence Preservation: We retain necessary logs and evidence in secure storage with restricted access for a minimum of 90 days after reporting, or longer where legally required, and then securely delete them unless preservation is requested by authorities.
6. Law‑Enforcement & Hotline Reporting
United States: We submit reports of apparent CSAM to NCMEC CyberTipline as required by 18 U.S.C. §2258A. We also cooperate with law‑enforcement requests consistent with applicable law.
Other Regions: We notify relevant hotlines (e.g., IWF in the U.K., or local INHOPE members) and/or local police as appropriate.
We do not notify offenders. We may notify affected users when legally and operationally appropriate.
7. Enforcement Scope & Reservation of Rights
This CSAE standard applies to all users and uses of our service. We may remove any content or account that we reasonably believe violates this policy or puts children at risk, including suspected grooming patterns or repeated boundary‑testing behavior. Nothing in this policy limits our ability or obligation to report to authorities.