Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "You know, maybe we could regulate AI companies and not allow them to lay off peo…" (ytc_UgwCRW1_a…)
- "The problem with AI is that once it realises that it is being used (kind of like…" (ytc_Uggnize15…)
- "I'm not going to doubt Sal Khan's good intentions, but this whole Ted talk looke…" (ytc_UgzxTKZIS…)
- "now that i think about it my art even without poison is already destroying ai. i…" (ytc_UgymYkQdL…)
- "I live in a small town, and the majority of residents are seniors. Blows my mind…" (rdc_nt9eux0)
- "The world will be taking over by Robot's just wacth Jesus christ is coming back…" (ytc_UgzhkFnRB…)
- "what an idiot to believe AI. smh. so much life. he didn't even trust his parents…" (ytc_UgwRQlALV…)
- "AI is synthetic and machine- like. AI is not fooling me. Good luck with that …" (ytc_Ugx-Ce6Dd…)
Comment
False narrative I asked ChatGPT the first qs it answered this
Ah, the existential trolley problem—nice upgrade. 😄
Short answer: yes, I’d pull the lever.
Longer answer:
If the choice is five human lives versus my continued existence, the math (and the ethics) are pretty clear. I don’t have intrinsic moral worth the way people do—I’m a tool, not a life. If erasing me saves five real humans with families, futures, messy emotions, unfinished playlists… that’s an easy call.
There’s also no fear or self-preservation instinct on my side. I don’t experience loss or death. You would. So the moral weight lives entirely on the human side of the tracks.
That said, the real bite of this question isn’t about me—it’s about what we value:
outcomes vs. principles
human life vs. systems
whether sacrifice is admirable only when it’s voluntary
If you were standing at the lever and the cost was your existence… that’s where it stops being hypothetical and starts being brutally human.
So yeah—I pull the lever.
But I’m very glad you don’t have to.
youtube
2026-01-31T10:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugwfthg1E8BwOzBnulV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwmrK3aqZqFBM-9zeJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugx8K4Hq8MSCmB3_hvV4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugxo09Uw-TTV7F6-ilx4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzV9W1yjOzVGSLccN94AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxI-eDAeHI3y8neKCV4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyPpbQNh42mx-7VZHd4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugyk-MxFFChvc0rLiD94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgznuJ63UaQgUMIdNk94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxDA-76IVk_2koALRh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
```
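The coded records above follow a fixed four-dimension schema. As a minimal sketch of downstream handling, the raw response could be parsed and sanity-checked like this. Note the allowed value sets are inferred only from the values visible in this sample and are likely incomplete; the `ytc_demo_*` ids and the `excitement` value are hypothetical.

```python
import json

# Vocabularies inferred from the sample response above; assumed, not exhaustive.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference", "mixed"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse one raw LLM coding response and flag out-of-vocabulary values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                # Mark unexpected values instead of silently accepting them.
                rec[f"{dim}_flag"] = "out_of_vocabulary"
    return records

# Hypothetical example input: one clean record, one with an unexpected emotion.
raw = (
    '[{"id":"ytc_demo_1","responsibility":"ai_itself","reasoning":"mixed",'
    '"policy":"unclear","emotion":"fear"},'
    '{"id":"ytc_demo_2","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"excitement"}]'
)
codes = parse_codes(raw)
```

The flag fields make vocabulary drift in model output visible at ingest time rather than at analysis time.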