Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up a comment ID directly or by browsing random samples.
Comment

> AI fighting to not be turned off isnt rocket science
> You trained it on human vernacular
> Humans desire to not die, therefore AI trained on humans does not want to die

Source: youtube · Video: AI Moral Status · Posted: 2025-12-15T10:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzvMukBiWfqTBBElwh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzsAlkW12I9RQL72eh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugxyu-9Ff3NNdWNEaV54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw5Rma_HKFD62WaM5p4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzKpPVLAaHSwZopfj14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyN3XAmCRlFoJGXB2V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_Ugx49nQXVsUp8fR5oeh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzzddeMaGBVorGZL6V4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwmWOg7d5n5mE0Ch4B4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyLr8wi7R-ff0TQejp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
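A response like the one above can be parsed and indexed by comment ID to recover each comment's codes. The sketch below is a minimal, hypothetical example (not the dashboard's actual implementation): it assumes only what the response shows, namely a JSON array of objects with an `id` plus the four dimensions from the coding-result table (`responsibility`, `reasoning`, `policy`, `emotion`).

```python
import json

# Two rows copied verbatim from the raw response above, for illustration.
raw = '''
[
  {"id":"ytc_UgzvMukBiWfqTBBElwh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzsAlkW12I9RQL72eh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
'''

# The four coded dimensions, as listed in the coding-result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(response_text):
    """Parse the model's JSON array and index rows by comment ID.

    Rejects rows that are missing any coded dimension, so malformed
    model output fails loudly rather than silently dropping codes.
    """
    rows = json.loads(response_text)
    indexed = {}
    for row in rows:
        missing = [d for d in DIMENSIONS if d not in row]
        if missing:
            raise ValueError(f"{row.get('id')}: missing dimensions {missing}")
        indexed[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return indexed

codes = index_by_id(raw)
print(codes["ytc_UgzvMukBiWfqTBBElwh4AaABAg"]["emotion"])  # → indifference
```

This mirrors the lookup-by-ID flow the page describes: the coded comment shown above resolves to the first row of the response.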