Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click any to inspect):

- "Thanks for those AI related video. Let's make more people aware how dangerous it…" (ytc_Ugy-zE2db…)
- "yk I don't think robot will start killing us if we respect them, I will respect …" (ytc_UgxCGlUk7…)
- "Thank you for the clear and easy to understand explanation! I also advocate for …" (ytc_UgzITYQlM…)
- "I don’t know how to feel about this. I read the text in the image and went to ch…" (rdc_nnvgqt0)
- "AI doesn’t need to be conscious to be dangerous to us humans. It just needs reso…" (ytc_UgyhV3tbt…)
- "exra.. driverless car? really.. how wrong you are as to the :benefits" PUBLiC t…" (ytc_Ugydm5Y1N…)
- "Whatever that AI bro done is besides the point. NightShade has already been defe…" (ytc_UgyQup-tf…)
- "In my opinion, humanity will destroy itself - "using old effective methods". And…" (ytr_Ugz71Xn76…)
Comment

> but what if ai plans to kill all humans because they learn that we are a danger to ourselves? (their logic being: "humans suffer because of the choices that they make. if we kill humans, it will end their suffering.")

youtube · AI Moral Status · 2017-11-17T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
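
For downstream analysis, each coding result can be held in a typed record. Below is a minimal sketch, assuming only the four dimensions and timestamp shown in the table above; `CodedComment` and its field names are illustrative, not the project's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class CodedComment:
    """One coded comment: the four coding dimensions plus provenance."""
    comment_id: str      # e.g. "ytc_...", "rdc_...", "ytr_..."
    responsibility: str  # e.g. "ai_itself", "developer", "none"
    reasoning: str       # e.g. "consequentialist", "deontological"
    policy: str          # e.g. "none", "regulate", "ban"
    emotion: str         # e.g. "fear", "approval", "outrage"
    coded_at: datetime   # when the coding was recorded

# One record from the raw response below, as an instance
# (hypothetical constructor call; values copied from the data shown):
example = CodedComment(
    comment_id="ytc_UgxTItdIOzWySrEFX9p4AaABAg",
    responsibility="ai_itself",
    reasoning="consequentialist",
    policy="none",
    emotion="fear",
    coded_at=datetime.fromisoformat("2026-04-27T06:24:53.388235"),
)
```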
Raw LLM Response
```json
[
{"id":"ytc_UgwV_CH71BaNKhAo8594AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxTItdIOzWySrEFX9p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxDUyI5frL8caXhBBp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzhGru1GDQHH17pOh94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxMTW8h5zCmqaA2m254AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyhFfD4qt0E1WgEH2d4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwoq4xYlJXrnjcWmeF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz0TCCzeibQ6z-GQiZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyVUl0ZlE2n3GayUa14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxQSQPG-SOAPGvAQ2R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
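
Because the model returns one JSON array per batch, a light validation pass can catch malformed records before they enter the dataset. Here is a minimal sketch, assuming only the category values visible in this response; the full codebook may permit more values, and `ALLOWED` and `validate_batch` are illustrative names, not part of the project's code:

```python
import json

# Allowed values per dimension, inferred from the responses shown here;
# an assumption, not the project's full codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "government",
                       "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "unclear"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"fear", "approval", "outrage", "mixed", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and keep only well-formed records."""
    records = json.loads(raw)  # raises ValueError on malformed JSON
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip records with no comment ID
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Records that fail the check can be logged and re-queued for recoding rather than silently dropped, so the raw response shown here stays reconstructable from the stored data.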