Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by comment ID.
Random samples:

- "Great video CBS news and this hearing about AI was interesting to watch and enjo…" (ytc_Ugxjj2ZB5…)
- "I was planning to become a manga illustrator, and if I lose it to an artificial …" (ytc_UgwfE3I3c…)
- "AI is going to keep getting better. Humans are as good as they're ever going to …" (ytc_Ugw3C2saT…)
- "My initial reaction to this video is that people don't know the difference betwe…" (ytc_UgwqOthTA…)
- "For years I've been writing a story I've been passionate about and I want to giv…" (ytc_UgyxWcfMc…)
- "One question about the Ai future, with all this talk of having robots doing all …" (ytc_Ugz1edtxs…)
- "Unfortunately this is a common approach these people take in general, not just A…" (ytc_UgzZoNMNi…)
- "I don't want A.I to take over, but I think I'd rather the future where A.I takes…" (ytc_Ugw7-1JwW…)
Comment

> Well to the "It's very hard to build an AI with XYZ" line of thought: It might end up being difficult even if we can make an AI that values the same things as us to think it is worthwhile to keep us around. Just think about all the evidence we have of us being terrible to each other.

Source: youtube · AI Moral Status · 2025-10-31T08:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyZZEpDQ4Fol_rRz3d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxnhVMdx4H5KG97R914AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwgoTu7UFS3CUEDwlF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz8w9Zsyzc24y2przp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxtR4Pt8nUMCs_ZJ3x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyC5Gw2e__-OdtBDZF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgydqfQICatDtEr9AZ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyrDlVgZczTRreG_al4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyXw9i7ZA1Aq7C_Q0F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwmnECZLmYxsytfsqR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```