Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "This would be true if AI was actually going to take most of the jobs…..it’s not……" (ytc_Ugz5EfyGK…)
- "I don't have answers to all ethical dilemmas about self driving cars. But accord…" (ytc_UgjOOT8Vu…)
- "i mean in a way, even if disney just strongarms midjourney into having to buying…" (ytc_Ugy8lrjJW…)
- "the only way i could see it being used as a tool if it was made morally is if it…" (ytc_UgzipBEmr…)
- "Early 50's here, self taught maintenance in a hospital setting. I truly worry ab…" (ytc_Ugzoe9SRg…)
- "If someone finds ai chats me and my friends did together we would probably be de…" (ytc_UgyOEXLFl…)
- "If AI knows people destroy the plant. It can remove us, like he said it controls…" (ytc_Ugx0sO7KR…)
- "6:48 the irony of this ad on a video talking about AI taking our jobs is hilario…" (ytc_UgzZjJVMr…)
Comment

> when you use AI use it for good and be a polite and kind to it as you possibly can so that when it does become self aware it will think we are good and should keep us around.

youtube | AI Governance | 2025-08-18T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
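Each coded record assigns one value per dimension, and the values appear to come from small controlled vocabularies. As a minimal sketch, the allowed sets below are inferred from the raw responses shown on this page (not from a published codebook), so treat them as assumptions:

```python
# Dimension vocabularies inferred from the raw LLM responses on this page
# (assumed, not an official codebook).
ALLOWED = {
    "responsibility": {"user", "developer", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist"},
    "policy": {"none", "unclear", "ban", "regulate", "liability"},
    "emotion": {"fear", "outrage", "approval", "disapproval", "resignation"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record looks valid."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The record shown in the Coding Result table above.
record = {"id": "ytc_UgwVsOOE8NOOXa-hWtN4AaABAg",
          "responsibility": "user", "reasoning": "virtue",
          "policy": "none", "emotion": "approval"}
print(validate(record))  # → []
```

A check like this catches the occasional off-vocabulary value an LLM coder can emit before it reaches the dashboard.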
Raw LLM Response
```json
[
  {"id":"ytc_UgynCaj4sjCqdtdlEwt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy0FtG_xcpHdLf6OrB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz1BZv8cOGabONqmq14AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwVsOOE8NOOXa-hWtN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxBZm2qzceOlKw5vI94AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"disapproval"},
  {"id":"ytc_Ugxgbm5cQrGjL9Tk1ox4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzTf7vb0WrCpSAJ8eF4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyiUn4-NODoNCWqpml4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxTiCiPoatbvAO6D194AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwNClq8PZmqnWu_iSt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"fear"}
]
```
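The raw response is a JSON array of per-comment codes, so looking up a comment by ID (as the panel above offers) reduces to parsing the array and indexing it on the `id` field. A minimal sketch, assuming the response text is available as a string:

```python
import json

# Two entries copied from the raw LLM response above, standing in for the
# full response string.
raw = '''[
  {"id":"ytc_UgynCaj4sjCqdtdlEwt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwVsOOE8NOOXa-hWtN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}
]'''

# Index the parsed codes by comment ID for constant-time lookup.
codes_by_id = {entry["id"]: entry for entry in json.loads(raw)}

entry = codes_by_id["ytc_UgwVsOOE8NOOXa-hWtN4AaABAg"]
print(entry["reasoning"])  # → virtue
```

Because the model returns one array per batch of comments, building this index once per response makes every subsequent ID lookup trivial.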