Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
LLMs also give wrong answers due to post-training. In post-training, humans provide neural networks with a set of questions in which an answer is always available. As a result, LLMs are not exposed to null responses in the data. Once human trainers begin presenting LLMs with questions where the correct answer is “I don’t know,” the models start responding with “I don’t know.”
Source: youtube · Video: AI Moral Status · Posted: 2026-03-01T12:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugz8K7gIffnKEMKSnNB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyHhli5R6UqJ0qsfTJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgyL797_M71m5hQW-PN4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugyillgr3oYJn_d_FnV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz5Juih4UDG8Yij1MN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzUqHajhQLOQu10Pr54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwjLJk5tZcfPpq5q7N4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugy9S4Kpf-J-OMVdrWd4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugy9avnzUN7G8NPX67t4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugx8zuQBCFBUGuXyjcJ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
```
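The raw response is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of how such a batch could be parsed and indexed for lookup by comment ID (the variable names here are illustrative, not the tool's actual code; the IDs are shortened stand-ins, not real comment IDs):

```python
import json

# Example batch response in the same shape as the raw LLM output above
# (two illustrative rows; real IDs are long YouTube comment identifiers).
raw_llm_response = """[
  {"id": "ytc_AAA", "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_BBB", "responsibility": "developer", "reasoning": "deontological",
   "policy": "unclear", "emotion": "outrage"}
]"""

# Index the coded rows by comment ID so a single comment's
# dimension values can be looked up directly.
codes_by_id = {row["id"]: row for row in json.loads(raw_llm_response)}

print(codes_by_id["ytc_BBB"]["emotion"])  # outrage
```

The same index is what a "look up by comment ID" view needs: one parse of the batch, then constant-time retrieval of any comment's coded dimensions.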