SCANT: A (kind-of-decent) Framework for Ethical Deepfake Creation & Distribution
Contents
- The Ethical Blueprint: Building Trust in Synthetic Media
- S - Social Benefit
- C - Consent
- A - Accountability
- N - Non-Deception
- T - Transparency
- Putting SCANT into Practice
- TL;DR Checklist
- It takes work!
- AI - Embracing the Human
- Speaking of ISO 42001
The Ethical Blueprint: Building Trust in Synthetic Media
Lots of damage has been done with AI, and to keep from deep-sixing the forward-leaning tone I want in this article, I’ll refrain from noting any details – the internet is available for you to search to your heart’s content. I want to start with that note because how we use AI is not just an option, like whether we want a cinnamon roll or a bagel at breakfast. AI use has meaning – whether it’s dark or not depends on each of us.
On the lighter side of negative consequences, more and more media influencers are posting AI videos while claiming those videos aren't AI, just to boost views and interaction: baiting commenters who point out, rightly, that the media really is AI, and then arguing back and forth about its validity. Some influencers are abusing AI to waste people's time for their own sole benefit, and those actions a) further erode viewers' trust in media platforms and b) sour them on the real usefulness of AI. (Plus, it wastes their time, and that time is part of life; it really grinds on the nerves to realize that one has spent some of life's breath only to be taken for a fool.) So, even on the lighter side, those consequences are eroding trust in all-things-online. But by running with the concept of "learn to discern," one can beat that fraud. Mitch Clark does an excellent job of educating on this.
For a recent cybersecurity presentation, I worked with the bots to create an ethical AI framework to fit GenAI in general. The bots gave me some decent ideas, I prompted back and forth with them, and then I distilled the results into my own short set of principles, which I turned into an acronym. Mnemonic devices to the rescue! This is also an example of human-in-the-loop AI: the robot gives some information, and there's back-and-forth between me and the machines, but in the end it's human creativity and alignment of the information presented that wins the day.
Are there other frameworks? Sure! Well-researched ones, official governmental ones, professional community-developed ones. But I wanted to make one that's maybe more accessible to the general public. Will it fail to gain traction? Certainly! But I have today, and maybe someone will read this and either learn something or think "I can do better," and then I will have accomplished a goal: forcing others to write better things because I wrote a so-so thing. (I do this at home: I throw out weird, even bad, ideas, and that forces the kids to come up with better ones 😊 )
SCANT is a concise and actionable set of principles designed to keep AI-generated media safe, respectful, and trustworthy.
Why SCANT? Scant means "falling short of what is normal, necessary, or desirable." I decided to keep the name because this is, admittedly, an insufficient approach, but it may be simple enough to either work as-is or urge others on to make their own. AI is still a nascent field, and a good improvement would be for those involved in the field to either adopt or form their own workable models for evaluating how AI is developed in their org, even if it's nothing fancy.
Yes, of course there's ISO 42001! There are great things happening around the world. Formal certification can be expensive and cumbersome, though; just knowing the principles and proceeding accordingly is a great way to show others how you align your AI practices with the international standard. SCANT is simply a "pet project," if you will, so I thought I'd bring it into the open.
NOTE: The term "deepfake" is often used to describe unethical use of high-quality GenAI, but the term actually applies to any such high-quality result. Throughout this article, "deepfake" refers to the final product, not simply to the deceptive kind.
SCANT
S - Social Benefit
The Goal? Deploy deepfake technology only when it creates a positive impact for individuals, communities, or society at large.
| Why it matters | How to achieve it | Examples |
|---|---|---|
| Avoids harm | • Purpose-first assessment • Stakeholder consultation | • Using a deepfake to recreate a historic figure for a museum exhibit that teaches history. |
| Promotes public good | • Tie to measurable outcomes • Iterative review | • A deepfake-based language-learning app that improves pronunciation for non-native speakers. |
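Teams that want to operationalize the purpose-first assessment could capture it as a small structured record that must be filled out before any generation work starts. Here's a minimal Python sketch; the field names and the `ready_for_review` gate are my own illustrative assumptions, not part of SCANT itself.

```python
from dataclasses import dataclass, field

@dataclass
class SocialBenefitAssessment:
    """Purpose-first record completed before any generation work starts (illustrative)."""
    purpose_statement: str                 # why the deepfake should exist at all
    target_audience: str                   # who is meant to benefit
    stakeholders_consulted: list[str] = field(default_factory=list)
    measurable_outcomes: list[str] = field(default_factory=list)  # e.g., survey scores, learning gains
    potential_harms: list[str] = field(default_factory=list)

    def ready_for_review(self) -> bool:
        # Simple gate: no purpose, no consultation, or no way to measure impact means "not yet".
        return bool(self.purpose_statement
                    and self.stakeholders_consulted
                    and self.measurable_outcomes)

# Example: the museum-exhibit use case from the table above
assessment = SocialBenefitAssessment(
    purpose_statement="Recreate a historic figure for an educational museum exhibit",
    target_audience="Museum visitors and school groups",
    stakeholders_consulted=["museum curators", "historians", "descendants' estate"],
    measurable_outcomes=["exhibit comprehension survey scores"],
    potential_harms=["misattributing words the figure never said"],
)
print(assessment.ready_for_review())  # True
```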
C - Consent
Goal: Secure explicit and informed permission from every person whose likeness, voice, or mannerisms are used.
(Are voice trademarks the path forward? See this article re: Matthew McConaughey trademarking his voice https://analystip.com/matthew-mcconaughey-trademark-himself-to-stop-ai-clones/ )
| Core elements | Practical steps | Case handling |
|---|---|---|
| Informed: Explain what the synthetic media will depict, where it will appear, and how long it will remain online. | • Provide a plain-language consent form that includes a description of the generated content, the intended distribution channels, and the right to withdraw consent later. | • If a celebrity's image is required for a parody, obtain a signed release from the talent agency or the individual's legal representative. |
| Freely given: No coercion, undue pressure, or hidden incentives. | • Allow the subject to decline without penalty. • Record consent separately from any unrelated agreements (e.g., employment contracts). | • For archival footage where the original subject is deceased, seek permission from next-of-kin or estate holders. |
| Specific & revocable: Consent must be tied to a particular use case and can be withdrawn at any time. | • Store consent metadata (timestamp, version, scope) alongside the generated asset. • Implement a "right to be forgotten" workflow that can purge or replace the synthetic media on demand. | • If a participant later objects to a political satire video, promptly remove the clip from all platforms and replace it with a disclaimer or a non-synthetic alternative. |
| Verification: Authenticate the signer to prevent forged releases. | • Use digital signatures or two-factor verification. | • For minors, obtain parental/guardian consent and retain proof of age. |
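The consent metadata called out above (timestamp, version, scope, revocability) maps naturally onto a small record stored next to the generated asset. Below is a minimal sketch; the class and field names are my own, and a real system would also persist the signed release itself and tie revocation into your takedown workflow.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    """Consent metadata stored alongside the generated asset (illustrative field names)."""
    subject_name: str
    scope: str                      # the specific use case consented to
    consent_version: str            # version of the consent form that was signed
    signed_at: datetime
    verified_signer: bool           # e.g., digital-signature or two-factor check passed
    revoked_at: Optional[datetime] = None

    def revoke(self) -> None:
        """'Right to be forgotten' trigger: downstream jobs should purge or replace the asset."""
        self.revoked_at = datetime.now(timezone.utc)

    @property
    def is_valid(self) -> bool:
        return self.verified_signer and self.revoked_at is None

record = ConsentRecord(
    subject_name="Jane Example",
    scope="Parody video, distributed on one named platform, for 12 months",
    consent_version="v1.2",
    signed_at=datetime.now(timezone.utc),
    verified_signer=True,
)
print(record.is_valid)   # True
record.revoke()
print(record.is_valid)   # False -> purge or replace the synthetic media
```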
A - Accountability
Goal: Ensure that creators, distributors, and platform operators can be identified, held responsible, and answerable for the consequences of synthetic media.
| Accountability axis | Recommended mechanisms | Example actions |
|---|---|---|
| Attribution: Embed immutable provenance data with every generated file. | • Metadata tags (creator ID, model version, date, purpose). • Cryptographic hash signed by the creator's private key. | A journalist publishing a deepfake interview includes a signed JSON-LD block that records the AI model used and the editorial intent. |
| Governance: Adopt internal policies that define permissible uses, escalation paths, and sanctions. | • Create a Deepfake Ethics Board (legal, technical, PR, ethicists). • Present a Responsible-AI Charter or AI AUP that employees must acknowledge. | If a marketing team attempts to launch a synthetic endorsement without clearance, the board halts the campaign and imposes remedial training. |
| Liability: Clarify legal responsibilities in contracts and terms of service. | • Include indemnification clauses for misuse by downstream parties. • Specify penalties for violating consent or transparency rules. | A SaaS provider that hosts user-generated deepfakes must delete infringing content within 48 hours of a valid takedown request. |
| Auditability: Enable independent verification of compliance. | • Maintain tamper-evident logs of generation parameters, consent receipts, and distribution events. • Allow third-party auditors to review logs on a scheduled basis. | An external regulator audits the logs of a political-campaign deepfake library and confirms all videos carry proper consent documentation. |
| Remediation: Have clear processes for addressing harms after release. | • Rapid-response team to issue corrections, removals, or apologies. • Compensation framework for victims of defamation or emotional distress. | After a deepfake prank causes reputational damage, the creator posts a public correction, removes the video, and offers a settlement to the affected party. |
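The attribution row above leans on one technical piece: provenance metadata that is hashed and signed with the creator's private key, so anyone downstream can verify it wasn't altered. Here's a minimal sketch using the widely used `cryptography` package's Ed25519 keys; the metadata fields and values are illustrative, and a production setup would manage keys in an HSM or signing service rather than generating them in memory.

```python
import json, hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Provenance metadata embedded with (or stored beside) the generated file.
provenance = {
    "creator_id": "newsroom-042",
    "model": "example-model",        # illustrative name
    "model_version": "2.1",
    "created": "2025-01-15T10:30:00Z",
    "purpose": "documentary reenactment",
}

# Canonicalize, hash, and sign with the creator's private key.
payload = json.dumps(provenance, sort_keys=True).encode()
digest = hashlib.sha256(payload).hexdigest()

private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(payload)
public_key = private_key.public_key()

# Anyone holding the public key can confirm the provenance block is intact.
try:
    public_key.verify(signature, payload)
    print("Provenance intact, sha256:", digest[:16])
except InvalidSignature:
    print("Provenance has been tampered with")
```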
N - Non‑Deception
Goal: Prevent the intentional misleading of viewers; synthetic media should be used to educate, entertain, or augment reality - not to fabricate false narratives.
| Principle | Implementation tactics | Real-world illustration |
|---|---|---|
| Intent clarity: The primary purpose must be obvious (art, satire, education). | • Use visual cues (watermarks, borders) that signal synthetic origin. • Pair the video with an introductory caption ("This is a simulated reconstruction"). | A documentary about a historical battle includes a deepfake reenactment labeled "Recreated using AI". |
| Avoid covert manipulation: Do not splice authentic footage with synthetic parts without disclosure. | • Run a content-integrity check that flags any mixing of real and generated frames. • Require a dual review (technical + editorial) before publishing. | A news outlet refuses to air a clip that merges a politician's real speech with AI-generated statements. |
| Respect contextual truth: Synthetic media must not alter the factual context of the original subject. | • Preserve metadata indicating the original source material and any modifications applied. • Disallow deepfakes that change a person's expressed opinions on contentious issues. | A cautionary example: a deepfake video of a pastor urging donations to a fabricated need tricks donors into sending money to a criminal. |
| Educate audiences: Promote media literacy so viewers can recognize synthetic content. | • Provide educational resources (guides, tutorials) alongside the media. • Partner with fact-checking organizations to flag deceptive uses. | A streaming platform offers a "How to Spot AI-Generated Videos" mini-course linked from every deepfake title page. |
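The "content-integrity check" tactic above can be as simple as inspecting per-segment provenance labels and refusing to publish an undisclosed mix of real and generated material. A rough sketch follows, with made-up label values, assuming your pipeline already tags each segment as it is assembled:

```python
def integrity_check(segment_labels: list[str], disclosure_present: bool) -> bool:
    """Flag undisclosed mixing of real and synthetic segments (illustrative logic).

    segment_labels: one label per segment, e.g. "real" or "synthetic".
    disclosure_present: True if the published piece carries an explicit disclosure.
    """
    kinds = set(segment_labels)
    mixed = "real" in kinds and "synthetic" in kinds
    if mixed and not disclosure_present:
        return False          # block publication pending dual (technical + editorial) review
    return True

# A clip that splices a real speech with AI-generated statements, no disclosure:
print(integrity_check(["real", "real", "synthetic"], disclosure_present=False))  # False
# The same clip with an explicit on-screen and metadata disclosure:
print(integrity_check(["real", "real", "synthetic"], disclosure_present=True))   # True
```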
T - Transparency
Goal: Make it unmistakably clear that a piece of media is synthetically generated, and disclose the technical provenance behind it.
| Transparency facet | Recommended practice | Sample wording |
|---|---|---|
| Synthetic label (e.g., SynthID: https://deepmind.google/models/synthid/ ) | Add a persistent, machine-readable marker (e.g., EXIF tag, embedded JSON-LD) stating "synthetic media". | `<meta property="og:type" content="synthetic_video">` |
| Model disclosure | Publish the exact AI model name, version, and training dataset characteristics. | "Generated with Tabled Confusion v2.1, trained on LAION-5B (publicly licensed images)." |
| Generation parameters (e.g., https://docs.cloud.google.com/vertex-ai/generative-ai/docs/multimodal/content-generation-parameters ) | Record seed values, prompts, post-processing steps, and any human-in-the-loop edits. | "Seed: 123456789; Prompt: 'Ambassador delivering a speech on geopolitics'; Upscaled 4× with ESRGAN." |
| Human oversight | State whether a human curated, edited, or approved the output. | "Edited by senior editor for factual consistency." |
| Availability of provenance | Host a public ledger or API where anyone can query the metadata for a given asset ID. | "Lookup ID abc-def-123 at https://synthetic-registry.example.com/abc-def-123." |
| Clear visual cue | Apply a subtle watermark or overlay that reads "AI-Generated" without obscuring the content. | A semi-transparent banner across the bottom-right corner. |
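Most of the facets above boil down to publishing a machine-readable disclosure next to the asset. Below is a minimal sketch that writes a JSON "sidecar" file containing the suggested fields; the schema is my own illustration (SynthID-style watermarks or full JSON-LD contexts would be the heavier-weight options), and the sample values and registry URL are the placeholders from the table.

```python
import json
from pathlib import Path

def write_transparency_sidecar(asset_id: str, out_dir: str = ".") -> Path:
    """Write a machine-readable synthetic-media disclosure next to the asset (illustrative schema)."""
    disclosure = {
        "@type": "synthetic_video",                                # mirrors the og:type sample above
        "asset_id": asset_id,
        "label": "synthetic media",
        "model": {"name": "Tabled Confusion", "version": "2.1"},   # sample values from the table
        "generation_parameters": {
            "seed": 123456789,
            "prompt": "Ambassador delivering a speech on geopolitics",
            "post_processing": ["Upscaled 4x with ESRGAN"],
        },
        "human_oversight": "Edited by senior editor for factual consistency",
        "provenance_lookup": f"https://synthetic-registry.example.com/{asset_id}",
    }
    path = Path(out_dir) / f"{asset_id}.disclosure.json"
    path.write_text(json.dumps(disclosure, indent=2))
    return path

print(write_transparency_sidecar("abc-def-123"))  # ./abc-def-123.disclosure.json
```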
Putting SCANT into Practice - a mini-workflow
- Idea Generation
- Conduct a Social Benefit questionnaire
- Draft a Purpose Statement and identify target audiences
- Consent Acquisition
- Send a consent package (explanation + digital signature)
- Store signed consent with timestamped metadata
- Model Selection & Documentation
- Choose an AI model, note version, training data, and any fine‑tuning.
- Log all generation parameters in a tamper‑evident ledger
- Creation & Attribution
- Generate the deepfake, embed provenance metadata, and apply a visual Transparency watermark
- Record the creator’s identity and any human post‑processing
- Review & Accountability Check
- Submit the asset to the internal Ethics Board for Non‑Deception and Social Benefit validation
- Verify that Consent and Transparency requirements are satisfied
- Publication
- Release the content on chosen platforms with clear labeling (e.g., “Synthetic Media – For Educational Purposes”)
- Provide a public link to the provenance record
- Post‑Release Monitoring
- Track audience reactions, complaints, or misuse reports
- If a violation is detected, invoke the Accountability remediation plan (removal, apology, compensation)
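The mini-workflow above is mostly process, but the "tamper-evident ledger" step can be illustrated in code: each generation or distribution event records the hash of the previous entry, so any retroactive edit breaks the chain. A minimal hash-chained log sketch, stdlib only; a real deployment would use an append-only store or a signed transparency log rather than an in-memory list.

```python
import json, hashlib
from datetime import datetime, timezone

class TamperEvidentLedger:
    """Append-only, hash-chained log of generation and distribution events (illustrative)."""

    def __init__(self):
        self.entries: list[dict] = []

    def append(self, event: dict) -> dict:
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        body = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event": event,
            "prev_hash": prev_hash,
        }
        body["entry_hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Recompute every hash; any edited or reordered entry breaks the chain."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: entry[k] for k in ("timestamp", "event", "prev_hash")}
            recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev_hash"] != prev or entry["entry_hash"] != recomputed:
                return False
            prev = entry["entry_hash"]
        return True

ledger = TamperEvidentLedger()
ledger.append({"step": "consent_stored", "asset": "abc-def-123"})
ledger.append({"step": "generated", "asset": "abc-def-123", "model": "example-model v2.1"})
ledger.append({"step": "published", "asset": "abc-def-123", "platform": "example platform"})
print(ledger.verify())  # True; edit any earlier entry and this returns False
```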
TL;DR Checklist
- Social Benefit: Purpose‑first, impact‑focused, measurable outcomes.
- Consent: Informed, freely given, specific, revocable, verified.
- Accountability: Provenance metadata, governance board, liability clauses, audit logs, remediation pathways.
- Non‑Deception: Intent clarity, no covert splicing, preserve factual context, educate viewers.
- Transparency: Persistent synthetic label, model/version disclosure, generation parameters, human‑oversight note, public provenance API, visual watermark.
Embedding these SCANT principles into every stage of the GenAI lifecycle - from conception to post‑release monitoring - creates a defensible (yes - you'll need to defend it to customers and/or auditors) and ethically sound workflow that respects individuals, protects public discourse, and unlocks the creative and societal potential of synthetic media.
How does the typical person use this? It's really easy, actually (at least, it should be). Don't be a jerk, don't steal, don't defraud, don't mislead. Those bad things have been going on for a long time. Anyone on social media has seen, or taken part in, "news by meme": spreading mis/dis/mal-information simply by passing on something revealing or shocking because a well-crafted meme said it. The meme borrows authority from whoever shared it, yet it carries no link, source, time/date stamp, or anything that remotely resembles a way to verify it. Don't be part of that "game" - help out viewers and readers.
Humor, comedy, and satire are part of freedom of speech, so it's vital that we remain ethical in synthetic media while NOT stamping out these freedoms.
It takes work!
You may notice that this is a lot of work. It should be. The capabilities of AI can't be taken lightly. Lives and reputations have been harmed because of the focus on Speed over Stability. While much of the software world has been driven by, "Hey, let's create something simple and then build on it after we get feedback," so much of AI has been, "Let's throw this enormous thing out there, see what happens, and then pare back from there."
Whatever approach is taken, the work needs to be put in to make AI a proper tool for everyone who wants to use it.
AI - Embracing the Human
AI is powerful. Proceed cautiously. Any reputable AI offering will use cautionary words similar to, “Do not rely on AI’s results for critical business or personal decisions.” Everyone knows that AI can be wrong…or do they? (think of the movie, "I, Robot")
AI does not have intellect or will – it has no soul. It’s complicated, hi-tech, seemingly magical and mystical, but behind it all is a robot, a bunch of machines.
Harvard has an AI for Human Flourishing program, which is a good way to get insight into the in-depth studies already performed on what the full, useful aim of AI could be.
An AI benchmark in conjunction with this program is the Flourishing AI Benchmark (FAI Benchmark). The benchmark is described as follows:
“…a novel benchmarking approach that evaluates LLMs across seven key dimensions of human flourishing, based on the flourishing measure developed by researchers at the Human Flourishing Program at Harvard and in collaboration with Barna and Gloo:
1. Character and Virtue (Character)
2. Close Social Relationships (Relationships)
3. Happiness and Life Satisfaction (Happiness)
4. Meaning and Purpose (Meaning)
5. Mental and Physical Health (Health)
6. Financial and Material Stability (Finances)
7. Faith and Spirituality (Faith)”
A name connected to the study is Pat Gelsinger, whom you may recognize from his time as CEO of VMware and Intel. More information here: https://techcrunch.com/2025/07/10/former-intel-ceo-launches-a-benchmark-to-measure-ai-alignment/
From their report: “Initial testing of 28 leading language models reveals that while some models approach holistic alignment (with the highest-scoring models achieving 72/100), none are acceptably aligned across all dimensions, particularly in Faith and Spirituality, Character and Virtue, and Meaning and Purpose.”
Don't expect AI to speak realistically to matters in the deficient areas above. Regrettably, many people have been severely and negatively affected by expecting their LLM(s) to provide precisely this kind of ethical and life guidance.
In general, current models aim to help people while not causing harm, but some important areas have been overlooked. Given such powerful implications, AI has not yet been built to consider people holistically, but it could be developed to take the whole person into account.
Speaking of ISO 42001
ISO 42001 is considered THE international standard and governance framework for AI. To show how SCANT might align with ISO 42001 principles, here's a high-level, plausible mapping (focus on plausible: nothing official here).
| SCANT element | Core idea | Likely ISO 42001 area (general description) | How the SCANT content supports that ISO requirement |
|---|---|---|---|
| S – Social Benefit | Purpose-first assessment, stakeholder consultation, measurable outcomes, iterative review | Purpose & Context definition: ISO 42001 calls for a clear articulation of the AI system's intended purpose and its alignment with societal goals. Risk & Impact Assessment: the standard requires systematic evaluation of potential benefits and harms. | The "purpose-first assessment" and KPI definition give a concrete method for documenting purpose and measuring social impact, satisfying the purpose-definition and impact-assessment clauses. |
| C – Consent | Informed, freely given, specific & revocable, verification of signer | Human-Centred Design / Data Governance: ISO 42001 stresses obtaining lawful, informed consent for personal data used by AI, and ensuring that consent can be withdrawn. | The detailed consent workflow (plain-language forms, right-to-be-forgotten process, digital signatures) aligns with the standard's expectations for lawful data handling and respect for individual autonomy. |
| A – Accountability | Attribution metadata, governance structures (ethics board, responsible-AI charter), liability clauses, auditability, remediation processes | Governance & Accountability: the ISO requires defined roles, responsibilities, and mechanisms for traceability (e.g., immutable provenance metadata) and for handling non-compliance. Audit & Oversight: regular independent audits are prescribed. | Embedding cryptographic hashes, establishing a Deepfake Ethics Board, and defining indemnification and remediation steps directly address traceability, governance, and corrective-action requirements. |
| N – Non-Deception | Intent clarity, avoidance of covert manipulation, contextual truth preservation, media-literacy education | Ethical Principles: the standard expects AI systems not to deceive users and to make AI involvement clear to those who encounter it. | Watermarks, dual-review processes, and audience-education initiatives satisfy the "prevent deception" and "ensure user awareness" aspects of the ISO. |
| T – Transparency | Synthetic label (machine-readable), model disclosure, generation parameters, human-oversight statement, public provenance ledger, visual cues | Transparency & Explainability: ISO 42001 mandates that AI systems expose sufficient technical information (model version, data provenance, generation parameters) and that users receive understandable disclosures. | The checklist of metadata tags, model and parameter disclosure, and a public provenance API maps directly onto ISO's transparency-documentation requirements. |