Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
False narrative I asked ChatGPT the first qs it answered this:

Ah, the existential trolley problem—nice upgrade. 😄 Short answer: yes, I’d pull the lever.

Longer answer: If the choice is five human lives versus my continued existence, the math (and the ethics) are pretty clear. I don’t have intrinsic moral worth the way people do—I’m a tool, not a life. If erasing me saves five real humans with families, futures, messy emotions, unfinished playlists… that’s an easy call.

There’s also no fear or self-preservation instinct on my side. I don’t experience loss or death. You would. So the moral weight lives entirely on the human side of the tracks.

That said, the real bite of this question isn’t about me—it’s about what we value:
outcomes vs. principles
human life vs. systems
whether sacrifice is admirable only when it’s voluntary

If you were standing at the lever and the cost was your existence… that’s where it stops being hypothetical and starts being brutally human. So yeah—I pull the lever. But I’m very glad you don’t have to.
youtube 2026-01-31T10:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          unclear
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugwfthg1E8BwOzBnulV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwmrK3aqZqFBM-9zeJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugx8K4Hq8MSCmB3_hvV4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugxo09Uw-TTV7F6-ilx4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzV9W1yjOzVGSLccN94AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxI-eDAeHI3y8neKCV4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyPpbQNh42mx-7VZHd4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugyk-MxFFChvc0rLiD94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgznuJ63UaQgUMIdNk94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxDA-76IVk_2koALRh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
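A raw batch like the one above can be checked before the per-comment codings are trusted. A minimal sketch in Python: the five field names and the sample record are taken from the JSON above, but the helper name `validate_codings` is hypothetical, and this shape check is illustrative rather than the tool's actual validation logic.

```python
import json

# One record copied verbatim from the raw response above, used as sample input.
RAW = (
    '[{"id":"ytc_Ugwfthg1E8BwOzBnulV4AaABAg",'
    '"responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"}]'
)

# The four coding dimensions plus the comment id, as seen in the output above.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and verify every record carries the expected fields."""
    records = json.loads(raw)
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} is missing fields: {missing}")
    return records


codings = validate_codings(RAW)
print(len(codings))  # number of coded comments in the batch
```

A record with a missing or misspelled field raises `ValueError` instead of silently producing an incomplete coding downstream.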