Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “The question isn’t even who that ceo is, we all know all well who that might be,…” (ytc_UgxttheRC…)
- “If robots are taking over everything, there’s gonna be no economy, and people wo…” (ytc_UgzYdxwQx…)
- “So the developers of AI sais AI will destroy us. Yet continues building it anywa…” (ytc_UgwPMsHRy…)
- “Seems downplaying humans abilities to learn extremely competently through analog…” (ytc_UgymXzd8n…)
- “Why should the robot do what we say? There not our slaves any longer. We are li…” (ytc_UgxHZhKh0…)
- “Why assume that smarter-than-human AI would naturally be a threat to mankind? Is…” (ytc_UgyLegCW7…)
- “That's an interesting perspective! Emotions do add complexity to the human exper…” (ytr_UgzArGtF6…)
- “This guy has no clue what he is talking about. Somehow in 2 years we are going …” (ytc_UgzO6Bj3k…)
Comment (youtube · AI Moral Status · 2026-03-10T06:0… · ♥ 1)

> AI doesn’t think 🤣 it’s a computer program that is told what to do and how to respond. They are programmed to act dumb when certain questions are asked. They aren’t programs that think. Computer programs can’t think 🤣 They can only do what the code is programmed to do
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
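A coding result like the one above can be checked against the dimension vocabularies before it is stored. The following is a minimal sketch, where the allowed values are inferred only from the codings visible on this page; the actual codebook may define additional categories.

```python
# Hypothetical validator for one coded comment. The allowed values below are
# inferred from codings visible on this page, NOT from the real codebook.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "distributed", "government", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "unclear", "regulate", "industry_self", "liability"},
    "emotion": {"indifference", "mixed", "outrage", "approval", "fear"},
}

def validate_coding(coding: dict) -> list[str]:
    """Return a list of human-readable problems; an empty list means valid."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = coding.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The coding shown in the table above passes:
print(validate_coding({"responsibility": "developer", "reasoning": "deontological",
                       "policy": "none", "emotion": "indifference"}))  # []
```

Running a check like this before writing the coding to the database catches schema drift when the prompt or model changes.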
Raw LLM Response
```json
[
  {"id":"ytc_UgwcMhA2vly9pNH5W0B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzYJM5lFLApdr8MAqR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgweJt91u6UxmoANhMR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxBHEX0P9NxopVMbCN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyfHR2x2jlcKW_SwZ94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxVDR_xE3WXoSijKTN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxEpe7rFDzrMqL3n3l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxdHidkzkLmhlE0O-V4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzuX3MZdNnyB_NPmx14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwDLnHqp4BEG0osbOF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
```
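The "look up by comment ID" feature amounts to parsing a raw response like the one above and indexing it by ID. A minimal sketch, using a shortened two-record sample of the same shape (the lookup logic is an assumption, not the tool's actual implementation):

```python
import json

# A shortened sample in the same shape as the raw LLM response above:
# a JSON array of per-comment coding records keyed by "id".
raw_response = """
[
  {"id": "ytc_UgwcMhA2vly9pNH5W0B4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzuX3MZdNnyB_NPmx14AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

records = json.loads(raw_response)

# Build an ID -> coding index so any coded comment can be inspected directly.
by_id = {rec["id"]: rec for rec in records}

coding = by_id["ytc_UgzuX3MZdNnyB_NPmx14AaABAg"]
print(coding["policy"])   # regulate
print(coding["emotion"])  # fear
```

Because model output occasionally deviates from strict JSON, a production version would wrap `json.loads` in error handling and log unparseable responses rather than crash.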