Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "Advocate here. I will guarantee that no Ai can understand and turn a situation/i…" (ytc_UgzJynz5X…)
- "I am against thr development of ai for two reasons. Number one is being able to …" (ytc_UgwrAwojg…)
- "I dislike AI art and much about AI in general, yet some people might call me a h…" (ytc_Ugx1sqb3K…)
- "😂😂Damm.. all this appeasement talk with no real though for the bigger picture, h…" (ytc_UgzfzFtpE…)
- "It sounds like you’re bringing up a pretty intense scenario! In the video, Sophi…" (ytr_UgwNfOxdo…)
- "I am not that good at art myself, especially when I am doing a comic(I also have…" (ytc_Ugwh6cZTo…)
- "AI doesn’t need to be “evil” to destroy humanity, all it needs is to be indiffer…" (ytc_UgyR9uD58…)
- "Just cause I use a calculator doesn’t mean I’m a math wiz. To the same degree, p…" (ytc_UgzMcyn_C…)
Comment

> Honestly, I don't think this is as dire as this video says. If you haven't noticed, us humans are control freaks. We love being in control. If AI is getting out of control, I think there will be a human response to rein it in. We have many avenues to take to make sure humans will be protected.

Source: youtube · AI Moral Status · 2025-04-30T08:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxaybZHAcmO4hguwut4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwrZcjgEpFuIfuDD6R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy9D8zcdyj6UwwIjRV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwpBD81GgBd6L_Vkl94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxVFAhafhr5V9Hfwjl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxg_34IO9HqO7m-Oud4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwb3xPCjd_JUXQDhHV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwoBrhxMqjslharg9x4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxIkBSXre4P6myThsd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyC08jsEBmlQinRsgh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
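A raw response like the one above is only useful once it is parsed and validated against the coding schema (id plus the four dimensions: responsibility, reasoning, policy, emotion). As a minimal sketch of what that downstream step might look like, assuming the model returns a JSON array of flat objects (the `parse_codings` helper and the abbreviated sample here are illustrative, not part of the actual pipeline):

```python
import json

# Abbreviated sample of a raw LLM coding response, taken from the output above.
raw = '''[
  {"id": "ytc_UgxaybZHAcmO4hguwut4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugy9D8zcdyj6UwwIjRV4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]'''

# Every record must carry the comment ID and all four coding dimensions.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse a raw coding response, keeping only well-formed records."""
    records = json.loads(text)
    return [r for r in records
            if isinstance(r, dict) and REQUIRED_KEYS <= r.keys()]

codings = parse_codings(raw)
# Index by comment ID to support the "look up by comment ID" view.
by_id = {r["id"]: r for r in codings}
print(len(codings))  # 2
```

Filtering out malformed records before indexing keeps a single bad row in the model output from breaking the whole batch.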