When a Church Nails AI Ethics (From a Privacy Hawk Who's Agnostic)
By Casey Cannady: technologist, traveler & unapologetic privacy hawk
I'm an agnostic, anti–organized religion privacy hawk who's spent decades in the surveillance-and-enterprise-tech trenches.
So I did not expect to say: "Hey, this church's AI guidance is actually a solid starting point for responsible AI use."
Yet here we are.
The Church of Jesus Christ of Latter-day Saints recently published guidance on artificial intelligence. Strip out the theology, and you're left with four principles that are surprisingly useful for anyone who cares about privacy, surveillance, and not turning your life over to SaaS vendors:
- Use AI in positive, helpful ways that uphold your integrity
- AI cannot replace the individual
- Leaders should not rely on AI to provide advice on sensitive matters
- Sensitive information should not be entered into AI tools
None of that sounds radical, right? But when you work in surveillance-tech-adjacent spaces... whether that's endpoint management, retail analytics, or AI tooling... you start to notice how few people think critically about data pipelines, model training practices, or the second- and third-order privacy implications of their AI use.
1. "Use AI in positive, helpful ways that uphold your integrity"
What this actually means
Don't let AI pretend to be you. Don't let it write your emails with a 'voice' that isn't yours, craft fake authenticity for LinkedIn posts, or generate responses that deceive others about what you know, what you're capable of, or whether a human was actually involved.
Integrity means people should reasonably expect that the person they're communicating with is who they think it is, and that you're taking responsibility for what you put your name on.
Where this collides with the surveillance economy
The more you let AI systems ghost-write your communication, make your decisions, or infer your intent, the more those systems (and the companies running them) know about you... and the more your behavior becomes predictable, modelable, and monetizable. If you blur the line between "me" and "AI-assisted me," you're also blurring the line between private thought and logged data feed.
2. "AI cannot replace the individual"
What this really means
You are still responsible. AI is a tool, not an actor. It doesn't 'think' in any meaningful sense; it synthesizes patterns from data and gives you plausible outputs. If you treat it like an authority or let it become your substitute decision-maker, you're offloading judgment you should be keeping for yourself.
Surveillance and power asymmetry via outsourced cognition
Here's the kicker: letting AI "replace" your thinking isn't just ethically iffy... it's a privacy and autonomy risk multiplier. You're already legible through what you publish; your prompts are even more revealing, because they log your decision process in real time. Every time you let AI mediate your communication, your decision-making, or your relationships, you're also increasing what gets logged, what can be inferred, and what can be repurposed later.
If you let AI replace your own thinking instead of assisting it, you get de-skilled, you become more predictable and modelable, and you normalize the idea that 'the system decides, we comply'.
For neurodivergent folks especially
If you're AuDHD, AI can be a fantastic support: turning chaos into bullet-pointed clarity, helping break tasks down, and drafting communication when word-finding is hard. But if you let AI be your primary voice and decision-maker, you risk losing your own authentic style and pattern recognition while creating a detailed behavioral log of your internal life... gold for profiling if mishandled.
3. "Leaders should not rely upon AI to provide advice (medical, financial, legal, other sensitive matters)"
Why this is more than "AI might hallucinate"
Advice is power allocation. When you outsource high-stakes advice to an opaque system owned by someone else, you're shifting power... and data... to actors whose incentives you don't control. Advice often requires sensitive context: detailed financial data, health info, legal situation specifics, and HR or interpersonal conflicts. That's precisely the kind of data that, if leaked or misused, can hurt you long-term.
If you're a leader... whether in a company, a project, or a small distributed team... you own the consequences, not the model vendor. 'The AI told me' is not a legal defense. You may be violating duties of care by using a generic AI tool instead of a licensed professional for others' serious issues, and you're quietly training vendors on your edge cases.
4. "Sensitive information should not be entered into AI tools"
This is the one every privacy hawk wants printed on posters
On this point, the LDS guidance lines up with what serious privacy regulators and experts say: don't paste sensitive data into random AI tools. AI pipelines routinely involve web-scraped training data, data repurposed far beyond its original context, and large, attractive data stores that are prime targets for leakage.
'Don't paste sensitive data into random AI tools' isn't paranoia. It's basic data hygiene.
What counts as "sensitive" in the real world?
- Medical records and health history
- Financial statements tied to identity
- IDs, passports, SSNs, driver's license numbers
- Passwords, API keys, internal URLs, VPN details
- Detailed travel plans, addresses, and routines
- Internal HR complaints or performance issues
- Full, unredacted contracts and NDAs
- Detailed internal architecture diagrams and security configs
Safer patterns
- Redact and abstract wherever possible
- Prefer environments with explicit data protections (enterprise contracts, self-hosted models)
- Design for minimization: only send what's absolutely necessary (a rough sketch of these patterns follows below)
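To make the "redact and abstract" and "design for minimization" bullets concrete, here's a minimal Python sketch of scrubbing the obvious stuff before a prompt ever leaves your machine. The regex patterns, placeholder labels, and sample text are illustrative assumptions, not a complete PII detector: names, addresses, and context-dependent details will slip past simple patterns, so treat this as a starting habit, not a guarantee.

```python
import re

# Minimal redact-and-minimize sketch. The patterns and labels below are
# illustrative assumptions only, not a complete PII detector.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),                        # US SSN-style numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),                # email addresses
    (re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9_]{16,}\b"), "[SECRET]"), # token-looking strings
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),                      # card-like digit runs
]

def redact(text: str) -> str:
    """Replace obviously sensitive substrings with labeled placeholders."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

def minimize(context: str, question: str) -> str:
    """Build a prompt from the question plus only the redacted context it needs."""
    return f"Context (redacted):\n{redact(context)}\n\nQuestion: {question}"

if __name__ == "__main__":
    raw_notes = (
        "Employee jane.doe@example.com (SSN 123-45-6789) disputes a charge on "
        "card 4111 1111 1111 1111; internal token sk_live_abcdef1234567890."
    )
    prompt = minimize(raw_notes, "Draft a neutral summary of this billing dispute.")
    print(prompt)  # review what would leave your machine before anything is sent
```

The specific patterns matter less than the habit: put the redaction and review step in the workflow itself, so the minimized version is the default thing that gets sent... not an afterthought.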
A Privacy Hawk's AI Use Principles
- AI is a tool, not an authority. It can suggest and synthesize; it doesn't get the final say, especially on high-stakes questions.
- I will not use AI to deceive. No synthetic 'authenticity,' no hidden manipulation, no quiet profiling beyond what people reasonably expect.
- My judgment and skills stay in the loop. I use AI to amplify my thinking, not to replace it. I remain responsible for outcomes.
- High-stakes advice requires humans. For medical, financial, legal, or life-altering decisions, AI is prep and context... never the final advisor.
- Sensitive data does not go into tools I don't control. If it can significantly harm, embarrass, or materially impact someone, it doesn't belong in a public AI system.
- I design for data minimization by default. Share as little as possible, as late as possible, with as few parties as possible.
- I respect that my prompts are also data. Every prompt is potential training material, log entry, or subpoena target. I type accordingly.
Final thought: I didn't expect to find common ground with a church on AI ethics. But when I strip away the theology, what's left is practical, grounded guidance that aligns with what privacy advocates and security professionals have been saying for years. Whether you're religious or not, these principles offer a solid foundation for using AI responsibly... without surrendering your autonomy, your privacy, or your judgment to systems controlled by others.