
Content Moderation Policy

Last Updated: February 24, 2026

aiAllure maintains a structured content moderation system to ensure platform safety, legal compliance, and the protection of all users. This document describes our moderation processes, review standards, and enforcement procedures.

1. Moderation Overview

Our moderation system operates on three tiers:

Tier 1: Automated Detection

AI-powered filters automatically scan prompts, outputs, and user interactions for policy violations. This tier includes keyword filtering, pattern detection, and content classification.

Tier 2: Flagged Content Review

Content flagged by automated systems or user reports is queued for human review. Trained moderators evaluate flagged content against our policies without undue delay.

Tier 3: Escalation & Senior Review

Complex, ambiguous, or high-severity cases are escalated to senior compliance staff or legal counsel for final determination and enforcement decisions.

2. Flagged Content Review Process

When content is flagged:

  1. Intake: The report or automated flag is logged with a unique case ID, timestamp, and category classification.
  2. Triage: Reports are prioritized by severity. Critical reports (safety of minors, non-consensual content) receive immediate priority.
  3. Review: A moderator examines the flagged content, context, user history, and applicable policies.
  4. Decision: The moderator determines whether a violation occurred and selects the appropriate enforcement action.
  5. Documentation: All decisions, actions, and reasoning are logged in the audit trail.
  6. Notification: Where applicable, the reporting user and/or violating user are notified of the outcome.
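The intake and triage steps above amount to a severity-ordered work queue. As an illustrative sketch only (the class, field names, and severity tiers are hypothetical, not aiAllure's actual implementation), the prioritization could look like:

```python
import heapq
import itertools
from dataclasses import dataclass

# Hypothetical severity tiers: lower number is reviewed first.
SEVERITY = {"critical": 0, "high": 1, "medium": 2, "low": 3}

_order = itertools.count()  # tie-breaker keeps FIFO order within a severity tier

@dataclass
class Flag:
    case_id: str    # unique case ID assigned at intake
    category: str   # policy category classification
    severity: str   # one of the SEVERITY keys

class TriageQueue:
    """Min-heap keyed on severity, so critical reports jump the queue."""

    def __init__(self):
        self._heap = []

    def intake(self, flag: Flag) -> None:
        heapq.heappush(self._heap, (SEVERITY[flag.severity], next(_order), flag))

    def next_case(self) -> Flag:
        return heapq.heappop(self._heap)[2]

queue = TriageQueue()
queue.intake(Flag("CASE-0002", "spam", "low"))
queue.intake(Flag("CASE-0001", "ncii", "critical"))
print(queue.next_case().case_id)  # → CASE-0001 (critical report reviewed first)
```

The heap ordering mirrors the policy: a critical report filed later is still reviewed before earlier low-severity reports.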

3. Escalation Process

Cases are escalated when:

  • The content involves potential criminal activity (CSAM, terrorism, trafficking)
  • The case requires legal analysis or jurisdictional expertise
  • The user disputes the initial moderation decision
  • The violation involves a high-profile or public-interest dimension
  • Multiple related reports indicate a coordinated or systemic issue

Escalated cases are reviewed by senior compliance staff as a matter of priority and may involve external legal counsel.

4. Content Removal Policy

Content is removed when it violates our Prohibited Use Policy, Community Guidelines, or applicable law.

Removal actions include:

  • Immediate removal: For content involving minors, non-consensual intimate content, or imminent harm.
  • Reviewed removal: For content flagged as potentially violating, pending moderator confirmation.
  • Takedown: For valid copyright takedown notices (under EU Copyright Directive or US DMCA), or law enforcement requests.

Removed content is logged and preserved as required by law. Users are notified of removals with the reason and applicable policy.

5. Enforcement Actions

Enforcement is proportionate to the severity and frequency of violations:

Action | When Applied | Duration
Content Removal | Any confirmed violation | Permanent
Warning | First minor offense | Recorded permanently
Feature Restriction | Repeated minor offenses | 7-30 days
Temporary Suspension | Serious violations, 3+ warnings | 7-30 days
Permanent Ban | Severe violations, criminal content | Permanent + law enforcement referral

6. Enforcement History & Records

All enforcement actions are permanently logged in our compliance audit trail, including:

  • Timestamp of the action
  • Report ID and case ID
  • Category and severity of violation
  • Decision made and reasoning
  • Identity of the reviewer
  • Actions taken (content removal, warning, suspension, ban)
  • Notification sent to affected parties

These records are maintained for a minimum of 5 years to satisfy legal and regulatory requirements and to demonstrate defensible compliance practices.
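The audit-trail fields listed above form a fixed record schema. As a minimal sketch (field names and values are illustrative assumptions, not the production schema), such a record could be modeled as an immutable structure:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)  # frozen: audit records must not be mutated after logging
class EnforcementRecord:
    # Fields mirror the audit-trail items above; names are hypothetical.
    timestamp: str          # when the action was taken (UTC, ISO 8601)
    report_id: str
    case_id: str
    category: str           # violation category
    severity: str
    decision: str
    reasoning: str
    reviewer_id: str        # identity of the reviewer
    actions: tuple          # e.g. ("content_removal", "warning")
    notified_parties: tuple # e.g. ("reporter", "account_owner")

record = EnforcementRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    report_id="RPT-1024",
    case_id="CASE-0042",
    category="ncii",
    severity="critical",
    decision="violation_confirmed",
    reasoning="Depicted person's consent not established.",
    reviewer_id="mod-17",
    actions=("content_removal", "permanent_ban"),
    notified_parties=("reporter",),
)
print(asdict(record)["case_id"])  # → CASE-0042
```

Freezing the dataclass reflects the policy requirement that enforcement records are permanent and tamper-evident once written.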

7. Appeals Process

Users may appeal moderation decisions by:

  • Emailing hello@aiallure.com within 30 days of the enforcement action
  • Including the case reference number, a description of why the decision should be reconsidered, and any supporting evidence

Appeals are reviewed by a different moderator from the one who made the original decision. Appeal decisions are final and are communicated within 14 business days.

8. Transparency

We are committed to transparency in our moderation practices. We maintain internal records of:

  • Total reports received and resolved
  • Average response time
  • Actions taken by category
  • Number of appeals and their outcomes

9. CSAM Mandatory Reporting Procedure

Zero-Tolerance Policy — Immediate Action Required

aiAllure maintains an absolute zero-tolerance policy toward Child Sexual Abuse Material (CSAM). Any suspected CSAM triggers mandatory reporting obligations under EU Directive 2011/93/EU, Czech Act No. 40/2009 Coll. (Criminal Code, §192–193), and applicable international law.

9.1 Detection

Suspected CSAM may be identified through:

  • Automated AI-powered content scanning and age-estimation systems
  • Keyword and prompt filtering (blocklists for minor-related terms)
  • User reports via the Report button, Report Abuse page, or email
  • Manual review during content moderation
  • IdentityForge™ upload validation (automated age verification rejects images depicting minors)

9.2 Immediate Response (without undue delay upon detection)

  1. Content isolation: Suspected content is immediately removed from all user-facing systems and quarantined in a secure, access-restricted evidence store
  2. Account suspension: The associated user account is immediately suspended pending investigation
  3. Evidence preservation: All associated data (account details, IP addresses, device identifiers, session logs, uploaded images, generated content, chat history, payment records) is preserved under legal hold per our Data Retention Policy

9.3 Mandatory Reporting (without undue delay)

Reports are filed with the following authorities as applicable:

  • NCMEC CyberTipline (US) — missingkids.org/cybertipline — for content accessible to or originating from US users
  • Czech Police (Policie ČR) — National Organized Crime Agency (Národní centrála proti organizovanému zločinu, NCOZ) — as required under Czech law for operators based in the Czech Republic
  • Europol / EC3 — European Cybercrime Centre, for cross-border cases within the EU
  • INHOPE — via the relevant national hotline member — for international coordination

Each report includes: a description of the content, all preserved evidence, user account details, IP and access logs, timestamps, and the method of detection.

9.4 Post-Report Actions

  • The user account is permanently banned (no appeal)
  • All content associated with the account is permanently deleted from production systems (evidence copy retained under legal hold only)
  • The incident is logged in the compliance audit trail with full chain-of-custody documentation
  • Internal review is conducted to identify and improve detection gaps

Staff training: All personnel with access to user content or moderation tools receive mandatory training on CSAM identification, evidence handling, and reporting obligations at onboarding and annually thereafter.

10. Non-Consensual Intimate Imagery (NCII) & Deepfake Response Procedure

This procedure applies to reports of non-consensual intimate imagery, including AI-generated synthetic intimate imagery (“deepfakes”), which may constitute a criminal offense under Czech Criminal Code §193b and equivalent laws in other jurisdictions.

10.1 Report Intake

Reports may be submitted by:

  • Depicted persons (or their authorized representatives) — no account required
  • Platform users via the Report button or Report Abuse page
  • Law enforcement authorities via official channels
  • Email to hello@aiallure.com

10.2 Response Timeline

Step | Action | Deadline
1 | Acknowledge receipt of report to complainant | Within 4 hours
2 | Content removed / access restricted pending review | Within 12 hours
3 | Full investigation (identity verification, content analysis) | Without undue delay
4 | Final decision communicated to complainant | Without undue delay
5 | Law enforcement referral (if criminal conduct suspected) | Without undue delay

10.3 Enforcement Actions

Upon confirmation of an NCII/deepfake violation:

  • All content depicting the complainant’s likeness is permanently removed
  • All associated reference images (IdentityForge™ uploads) are permanently deleted
  • The offending account is permanently banned
  • Evidence is preserved under legal hold for law enforcement (per Data Retention Policy, Section 5)
  • Where the content appears to violate Czech Criminal Code §193b or equivalent laws, a referral is made to Czech Police (NCOZ) and/or the relevant national authority of the depicted person’s jurisdiction

10.4 Proactive Prevention

  • IdentityForge™ uploads undergo automated face verification, age estimation, and celebrity detection before any content is generated
  • Users must warrant consent and ownership of uploaded images (Terms of Service, Section 5)
  • AI-generated outputs are watermarked and labeled per our provenance commitments (Terms of Service, Section 9e)
  • Repeat offenders’ device fingerprints and IP addresses are blocklisted to prevent re-registration

11. Contact

For questions about our moderation practices:

Moderation: hello@aiallure.com
Appeals: hello@aiallure.com
Abuse Reports: hello@aiallure.com

Novera Group s.r.o.
Rybná 716/24
CZ-110 00 Praha 1