Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or inspect one of the random samples below.
Random samples
- ytc_UgyBEcu4Q…: "This has nothing to do with ai. Corporate and stock holder greed is what will le…"
- ytc_Ugxz1LgFY…: "I've done many of the same things as Dave, and I'm knocking on retirements' door…"
- ytc_UgwJoYmfn…: "AI has no way to experience human feeling in completeness, no body, nothing. It'…"
- ytc_Ugx7wcV-U…: "As I have grown to learn more about AI I have found that artists against it are …"
- ytc_UgwrxwWIa…: "The United States is doing it, China is doing it, don't complain just except it,…"
- ytc_UgzOTKDr8…: "I em late to the chat 😅, I use it in writeing as beta-beta reader, as critique a…"
- ytc_UgzsHTOyh…: "Huh? This is weird. None of this is how LLMs work. They don't think. At all. The…"
- ytc_Ugwim4mwf…: "WALL·E. The film seems to warn us about the dangers of an excessive dependence…"
Comment

> "Geoffrey Hinton outlines the individual dangers of AI, each already deeply unsettling. But the true nightmare lies in their coordinated use—like a military operation: deploying deception, disinformation, strategic phasing, anticipatory countermeasures, and psychological manipulation. In the wrong hands, these tools could be orchestrated into a calculated campaign against humanity, not just to overpower us, but to outthink and outmaneuver our defenses before we even realize we’re under attack."

Source: youtube · AI Governance · 2025-06-26T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy1xnYBjEz-UJ5d41d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwbpo07-1ytsmhcLpt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwUCfanmspWzOCO90J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxp9sp3gbUnWqJGuLp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugxy3lXEAH6QJJdlYIV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxKls1VsJStxXjLrMt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxILtX8bvl-3waxgcl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxtF4SdFiLzdExvNP14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzvMCakEqTRmqs_UYB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxm6w5kV6KfuAiz_xR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
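The raw response is a JSON array with one record per comment, keyed by comment ID, one field per coding dimension. A minimal validation sketch in Python; note that the allowed value sets below are inferred from the records shown on this page and the actual codebook may define additional categories:

```python
import json

# Allowed values per dimension, inferred from the sample records above;
# the real codebook may include categories not seen here.
ALLOWED = {
    "responsibility": {"distributed", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "liability", "regulate", "ban"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "mixed"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every coded record."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs on this page all carry the ytc_ prefix.
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"bad comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytc_Ugy1xnYBjEz-UJ5d41d4AaABAg",'
       '"responsibility":"distributed","reasoning":"consequentialist",'
       '"policy":"none","emotion":"indifference"}]')
records = validate_response(raw)
print(len(records), records[0]["emotion"])  # 1 indifference
```

A check like this catches the usual LLM coding failures (off-schema labels, malformed JSON, missing IDs) before records are written to the results table.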