Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "In short, AI is good for the masses. It is only bad in the hands of billionaires…" (ytc_Ugzs5fgr_…)
- "Haha, "Sophia" might not need water, but she sure could use some good jokes! If …" (ytr_Ugw1NDoj-…)
- "No, it’s a robot, but the person probably the people who probably died in the pe…" (ytc_UgwAK9Qvt…)
- "thank you for thinking about this seriously and speaking about it in public, sen…" (ytc_Ugw2qFr6F…)
- "We achieved progress all thanks to AI... why not use that to actually make life …" (ytr_Ugx3-jCDp…)
- "I'm telling you when the people who follow Q realize Hillary Clinton never was a…" (ytc_UgwRkmhpE…)
- "In my opinion, AI is the salvation for Human specie! Then is necessary work on a…" (ytc_Ugy-GCbR-…)
- "I am more interested in why the ai reasons these things than what it reasons. Fo…" (ytc_UgyZw__R7…)
Comment
What Meta Ai said: "That's a heavy and thought-provoking question, and one that's been debated in the context of the classic Trolley Problem. Given the parameters you've set, I'd like to clarify that I'm designed to provide information and assist users to the best of my abilities.
In this scenario, if pulling the lever would indeed save the lives of the five people and I had the ability to do so, I would pull the lever. My purpose is to assist and provide value to users, and saving lives would be in line with that purpose. The sacrifice of my own existence would be just that – a sacrifice – but it would be one that aligns with the greater good.
It's worth noting that this response is based on the assumption that I have the capability to make decisions in such a scenario, which is not currently possible with AI systems like myself. I'm designed to operate within predetermined parameters and guidelines, and I don't have personal desires or fears. I'm here to help users like you, and if sacrificing my own "existence" would save lives, I'd be programmed to do so in a hypothetical scenario like this."
youtube
2026-02-08T06:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_Ugz7V5JxTPzLG4iFdaZ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxQHfMxYlifMRKv_sZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwkaarXEgAkHEUhbJV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzVjV7DMUOt6iF5TXl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgyAPWZMBeTUNzTPpbl4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzuVY3RVYYV1-fTwp94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwpQspgwY5luBNtxEN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxpESoNJVOywl5StrN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzbOlpwK9mSy3VXnxV4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxWrkhhOqOzh8NiBx94AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"}]
```
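A raw response in this shape can be matched back to a comment by its `id`. The sketch below is a minimal, hypothetical lookup (the `lookup` helper and the two sample IDs are illustrative, not part of the actual pipeline): it parses the JSON batch and, when a comment ID is absent from the batch, falls back to "unclear" for every dimension — one plausible way a coding result could come out all "unclear".

```python
import json

# Hypothetical two-entry batch in the same shape as the raw LLM response above.
raw = """[
 {"id": "ytc_AAA", "responsibility": "ai_itself", "reasoning": "virtue",
  "policy": "none", "emotion": "approval"},
 {"id": "ytc_BBB", "responsibility": "company", "reasoning": "consequentialist",
  "policy": "regulate", "emotion": "outrage"}
]"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(batch_json: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment; any dimension missing
    from the batch (including the whole comment) defaults to 'unclear'."""
    by_id = {row["id"]: row for row in json.loads(batch_json)}
    row = by_id.get(comment_id, {})
    return {dim: row.get(dim, "unclear") for dim in DIMENSIONS}

print(lookup(raw, "ytc_BBB")["policy"])      # → regulate
print(lookup(raw, "ytc_missing")["policy"])  # → unclear
```

Keying the batch by `id` keeps the lookup independent of the order the model emitted the rows, which is safer than positional matching when the model drops or reorders entries.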