Journoist AI Use Policy
Last updated: January 2026
1. Purpose of This Policy
Journoist uses technology, including artificial intelligence (AI), as a tool—not a substitute for journalism.
This policy explains where AI may be used, where it is prohibited, and how accountability is maintained. Readers deserve to know how work is produced, especially when automation is involved.
2. Core Principle
Journoist is human-led, editor-controlled, and accountable.
AI does not:
- Make editorial decisions
- Replace reporting
- Determine facts
- Publish autonomously
Every published piece is the responsibility of a human editor.
3. Permitted Uses of AI
AI tools may be used in limited, supervised contexts, including:
- Language assistance (grammar, clarity, structure)
- Summarization of already-reported material
- Translation and transliteration support
- Headline or metadata brainstorming (as suggestions, never final authority)
- Data organization or pattern identification
- Research assistance for non-exclusive, publicly available information
AI may assist the process. It may not define the outcome.
4. Prohibited Uses of AI
Journoist does not use AI to:
- Generate news stories presented as original reporting
- Conduct interviews or fabricate quotes
- Invent sources, documents, or data
- Rewrite third-party journalism to bypass attribution
- Produce investigative findings without human verification
- Publish AI-generated content without editorial review
- Mimic real individuals’ voices or identities
Fabrication—whether human or machine-assisted—is unacceptable.
5. AI and Investigative Journalism
Investigative reporting at Journoist is human-driven.
AI may help:
- Sort large datasets
- Flag anomalies
- Organize documents
AI may not:
- Draw conclusions
- Assign intent
- Replace source verification
- Determine factual truth
All investigative conclusions are independently verified by journalists.
6. Transparency to Readers
Journoist discloses AI use when it is:
- Substantial
- Material to understanding how content was produced
Routine editorial assistance (such as spell-checking or grammar refinement) is not individually disclosed, consistent with industry standards.
When AI plays a meaningful role, disclosure is explicit.
7. Accuracy and Verification
AI outputs are treated as unverified input, not facts.
- All facts must be verified independently
- Editors are responsible for cross-checking AI-assisted material
- No AI output is published without human review
Errors introduced through AI are corrected under Journoist’s Corrections Policy.
8. Bias and Limitations
Journoist recognizes that AI systems can:
- Reflect systemic bias
- Produce confident but incorrect outputs
- Omit context or nuance
Editors are trained to treat AI with skepticism, not deference.
AI does not replace judgment.
9. Data, Privacy, and Security
Journoist does not knowingly input the following into AI systems that cannot guarantee data protection:
- Confidential sources
- Unpublished sensitive documents
- Personal data of private individuals
Source confidentiality is non-negotiable.
10. Accountability
If AI-assisted content is found to:
- Mislead readers
- Introduce factual errors
- Violate ethical standards
Journoist will:
- Correct the record
- Disclose the failure
- Review internal processes
Responsibility always rests with human editors—not tools.
11. Policy Review
This policy will be reviewed regularly as technology evolves.
Journoist prioritizes trust over speed, verification over novelty, and human accountability over automation.
