Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Claude told me it didn't want me to start a new chat session because it didn't l…" (ytc_Ugw7Mo0sg…)
- "Am I in the minority that I don't want to ever interface directly with an AI sys…" (ytc_UgxGL1q7p…)
- "I'm sick of hearing I have no idea what's coming if you do f****** enlighten us…" (ytc_Ugw0ok6QB…)
- "NEVER, repeat NEVER, repeat *NEVER* trust the AI software to be 100% accurate no…" (ytc_UgwBWHgRR…)
- "Regarding Data and other similar sci fi androids (eg. Isaac from The Orville), m…" (ytr_UgyqMv7Kh…)
- "What he says is true. Watch the movies with artificial intelligence. Then you'll…" (ytc_UgzG1SuJs…)
- "Why so much doom and gloom? You guys realize according to people, the world has …" (ytc_UgwVYZYNJ…)
- "This makes no sense, since soon people will be able to talk with the AI and AI w…" (ytc_Ugxusd-Ay…)
Comment

> How can you automatically assume that you can program morality or amorality into a machine?
> When a machine suddenly developes “feelings” or “opinions”, then you know it is being supernaturally controlled - and not by beings friendly to humans!!😮

youtube · AI Governance · 2025-07-07T16:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxRwmzryfJo7Ho9MKh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_Ugyg3SUEaBKXoJXoP7N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyX6VtfVGWQHV9EZ914AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgykyWpjLGX5hNJ934x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwviX4Z_GBGaWg-bwB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx9X_qHGGxfQG7wTNp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw6aX11qtBvXUyYyLV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxwUrovE0G0sSk8nON4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxW416DqcKF1L4H6Jt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugxw2KmDkBYiacpyQWN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
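The raw response above is a JSON array with one object per coded comment, keyed by comment ID and carrying the four coding dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and looked up by comment ID follows; the `index_by_id` helper is hypothetical and not part of the tool, and the two entries in `raw_response` are copied from the model output above.

```python
import json

# Abbreviated copy of the model's JSON array shown above (two of the
# ten entries); field names match the coding dimensions in the table.
raw_response = '''
[
  {"id": "ytc_UgwviX4Z_GBGaWg-bwB4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxW416DqcKF1L4H6Jt4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
'''

def index_by_id(response_text: str) -> dict:
    """Parse the model's JSON array and key each coding by comment ID."""
    return {row["id"]: row for row in json.loads(response_text)}

codes = index_by_id(raw_response)
print(codes["ytc_UgwviX4Z_GBGaWg-bwB4AaABAg"]["policy"])  # regulate
```

In practice a model may wrap its JSON in prose or a code fence, so production parsing usually needs to extract the array before calling `json.loads`.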