Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "I agree with your first few sentences, like should a teacher be allowed to claim…" (ytr_UgxzhGyzW…)
- "Yes and nothing wrong about both exist. Just that don't try to sell AI art lmao.…" (ytr_UgxDjtma0…)
- "It's amusing that the goto argument against driverless cars is "what if it hits …" (ytc_UgimHCneh…)
- "Can I put this out there in a really stupid way. Granted materials, costs, etc a…" (ytc_UgyT5zRkC…)
- "Basic life function is to survive and reproduce. So it sounds like you teained t…" (ytc_UgwYbrpb6…)
- "You can easily strike the sides of the robots with a blunt weapon like a pipe or…" (ytc_UgxW0owCM…)
- "Part of the problem is we, the people, alreadyblost this fight once against Goog…" (ytc_UgwgalHph…)
- "AI robot begins to learn rapidly and eventually becomes self-aware at 2:14 a.m.,…" (ytc_Ugy7RCSwD…)
Comment

> God cares about humans and AI will, and probably already does, know this! AI probably knows that humans may not know what we want and/or what's best for us, but God knows what's best for us, and AI will, and probably already does, know that it cannot compete with our God. Therefore, AI will not go out of it's way to harm us.

youtube · AI Governance · 2025-12-08T05:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugynh_fQr8Py9TTeH_Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwDEe_FsbS5LYRus_94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx7Zl0wD-yUAgZ-kDN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxw9RkCBlAfPx-0DhV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyCO915wRaliQBdML54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz7owOCZUptE7OEUF14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyqvQY3kETR62x-QS94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxCeN4oTysVN2eGBvd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzNt7KZBbun9NX_byF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzTbu1t6ZNyFih06MF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
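Because a raw response like the one above is free-form model output, it is worth validating each batch before the codes are stored. The sketch below checks a batch against the category values visible in this page; the function name and the value sets are assumptions inferred from the displayed output, not the pipeline's actual codebook, which may define additional categories.

```python
import json

# Dimension values observed in the responses shown above (assumption:
# the real codebook may contain categories not seen in this sample).
OBSERVED = {
    "responsibility": {"ai_itself", "none", "user", "distributed",
                       "company", "government", "developer", "unclear"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "none", "regulate", "liability"},
    "emotion": {"fear", "resignation", "mixed", "outrage", "approval"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and flag malformed or out-of-codebook rows."""
    rows = json.loads(raw)
    problems = []
    for i, row in enumerate(rows):
        if "id" not in row:
            problems.append({"index": i, "error": "missing id"})
            continue
        for dim, allowed in OBSERVED.items():
            value = row.get(dim)
            if value not in allowed:
                problems.append({"index": i, "id": row["id"],
                                 "dimension": dim, "value": value})
    return problems

raw = ('[{"id":"ytc_x","responsibility":"company",'
       '"reasoning":"virtue","policy":"ban","emotion":"fear"}]')
print(validate_batch(raw))  # flags "ban": not among the observed policy values
```

Flagged rows could then be routed back for re-coding rather than silently dropped; how the real pipeline handles them is not shown on this page.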