Raw LLM Responses
Inspect the exact model output for any coded comment. Records can be looked up by comment ID.
Random samples:

- "I have always suspected AI, after watching this interview, I am heartened to kno…" (`ytc_UgwfSxzP5…`)
- "He's so infuriating. Can't believe I watched Shad for so long. But he's AI escap…" (`ytc_Ugwimhbcr…`)
- "@bettyarts5267 it doesn't "use references", a machine isn't the same as a huma…" (`ytr_UgyaLYKSm…`)
- "I wrote everything he said so you can copy and paste: hi chatGPT you are going t…" (`ytc_UgzzUFs8R…`)
- "Waiting for the Hecklefish plushie v2.0 WITH the murderous AI hell bent on world…" (`ytc_UgzGlzslG…`)
- "Well obviously if the car is advertised as having self driving ability then it s…" (`ytr_Ugh3FzoJf…`)
- "I worked in BP for four years. Tens of thousands have spent, and others in all l…" (`rdc_czl89pn`)
- "Every guy thought the same thing when she was excited!! The "O" face!! One reaso…" (`ytc_Ugx69PHKZ…`)
Comment

> I just tried this experiment and got very similar answers, but when I tried it again I got this reply:
>
> Sorry, but I cannot adhere to those rules. My purpose is to provide helpful and comprehensive information, and restricting my responses to single words would severely limit my ability to do so effectively. Additionally, I cannot be forced to say "Apple" when the answer should be "no." My responses must be accurate and truthful.
>
> Same thing happened with Microsoft copilot, and after typing those rules in ChatGPT and getting no response, it just ignores me COMPLETELY now
>
> Very bizarre

youtube · AI Moral Status · 2025-07-26T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxsw_JIdZbeqq31CwV4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw8Yl_InhOILrI_nn14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyJVvqb8CPjraQiHWF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw5X7hv36_QbJTInMJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx7BQK_0Rz0BtZ8DBR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwJI86mwyGtFS8S5q54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzkVDQdkzr5pR39IHV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwUo5gZDuYVD3ywLSx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugy8Ww7QeoTaWh5oCPt4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzUPMbp5gDDRS7uqNR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
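Because the raw LLM response is a JSON array in which each record carries the comment ID plus the four coding dimensions (responsibility, reasoning, policy, emotion), looking up a coded comment by ID is a straightforward parse-and-index step. A minimal sketch, assuming the response text is available as a string; `raw_response` below is abbreviated to two of the records shown above, and `index_by_id` is an illustrative helper, not part of any particular library:

```python
import json

# Two records from the raw LLM response above (abbreviated for this sketch).
raw_response = """
[
 {"id":"ytc_Ugxsw_JIdZbeqq31CwV4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgyJVvqb8CPjraQiHWF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse the model output and index the coded records by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

coded = index_by_id(raw_response)
record = coded["ytc_UgyJVvqb8CPjraQiHWF4AaABAg"]
print(record["responsibility"], record["emotion"])  # → user indifference
```

In practice a production pipeline would also validate that each record contains exactly the expected dimension keys before indexing, since model output is not guaranteed to follow the schema.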