Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "people who say that digital art and ai is no different use ai themselves then pu…" — ytc_Ugygd-2D8…
- "My name is Adam Douglas Walters I live in the United States of America the reaso…" — ytc_Ugxmurr5-…
- "Garbage in - Garbage out for AI models. All it reads is anti white racist nonsen…" — ytc_UgxtVA8Ya…
- "Thought: what if one of the people who commented on this video was the most adva…" — ytc_UgxvDxESg…
- "Harmonic Evolution Guide — Compassion-Centered Update Stephane Fyfe & Harmonic …" — ytc_UgwJ5KsTg…
- "People worried about AI taking jobs have the wrong mindset. If companies can mak…" — ytc_Ugzmmtgax…
- "Oh, we know AI is a threat, but when did man every use the good sense God gave h…" — ytc_Ugwrl-3Wi…
- "There will be evolved jobs or the structure of society will change with universa…" — ytc_UgxsueiwZ…
Comment

> Portraying AI or other technologies as inherently unpredictable or autonomous misrepresents how these systems function. All of these tools are engineered, directed, and controlled by humans, and there is always a 'shut-off switch' or a control mechanism in place. If AI appears to act independently, it is doing so because it was programmed and deployed that way, not because it is out of control. This is analogous to a car: even if it's remote-controlled, there is always a human operator making decisions behind it. Framing technological tools as uncontrollable is a fundamentally pseudoscientific perspective that generates fear among those who do not understand the mechanisms. Doing so is misleading and ethically problematic, particularly when fear is used to obscure human agency and governance in technological systems.

youtube · AI Governance · 2025-09-08T19:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id": "ytc_UgwNZcOWd3YQ7cAS2Ep4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw9K5DybkKU4akq8iZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw8kNRnWSBnvbojN1B4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyTOhzqSxGRjlfei5p4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugwhe90Ce0Or9iooAqV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyuJ6o9LMEaShnc-Ft4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzuL9fDcxdtLFQgEt14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwY_4CNadQ-VrYiF6B4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwayJ6dNzSlk3rERyx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgylY8k3Kta1enfgb8l4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "indifference"}
]
```
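The lookup-by-comment-ID described above can be sketched as follows: parse the raw LLM response (a JSON array of per-comment codes) and index the rows by their `id` field. This is a minimal illustration, not the tool's actual implementation; the two rows embedded here are copied from the sample response above.

```python
import json

# Raw LLM response: a JSON array of coded rows, one per comment.
# (Two rows copied from the sample response above, for illustration.)
raw_response = """
[
  {"id": "ytc_UgylY8k3Kta1enfgb8l4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgyTOhzqSxGRjlfei5p4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse a raw LLM response and index the coded rows by comment ID."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

codes = index_by_id(raw_response)
print(codes["ytc_UgylY8k3Kta1enfgb8l4AaABAg"]["policy"])  # regulate
```

Indexing once into a dict makes every subsequent ID lookup O(1), which matters when inspecting individual comments out of a large coded batch.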