Raw LLM Responses
Inspect the exact model output for any coded comment, looking it up by comment ID or via the random samples below.
Random samples
- "The Arc was an ancient AI super computer that now secretly controls the US gover…" (ytc_Ugzp7Ymrs…)
- "And then there's me trying to get 2 AI to talk each other into having an existen…" (ytc_Ugx88COkB…)
- "The robot-lifting-weights analogy is totally applicable, what are you saying? In…" (ytc_UgxnrJbUU…)
- "While it's good that you're against the "AI" industry and all of the harm they'r…" (ytc_UgxHJ-UPv…)
- "Well this is the same autistic tool that said AI is dangerous and should not be …" (ytc_UgwLxwO2S…)
- "I mean if a specific race commits more crimes it makes sense that you are more a…" (ytc_Ugwvmc3yS…)
- "We're doomed, and the egg heads from the weird kid table killed us. Summoning de…" (ytc_UgxfgYLVS…)
- "It's not their own healthcare though. It's the rest of the world paying for poor…" (rdc_dcw9xww)
Comment
That's the fun part they don't disagree. I try to learn how to make animation and i asked gemini to work on prosedural material. It was not great. So i asked chatgpt and said: "Gemini told this." Gpt said: "Gemini was right. That is the itty gritty of it. But Gemini missed this on the formula."
I also asked gemini to help me how to animate the scene. The answers gemini said were correct. But gemini told them In very vague way. So that it's answers caused More problems. But when i got all the answers from gemini and re work the animation. It work as gemini said it should. And it was perfect. Now i never figured out why gemini told its answer In vague wrong order when it knew the answer and could have told them In correct order. Was it user error on my end? Or was it something that we humans don't understand. Others might say that it was me. But not even companies that work on AI know fully how they think. I also recently Last night put AI into its toes. As if it was "scared".
youtube · AI Moral Status · 2026-03-05T08:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytr_UgxHodCfgC2FkyVdsCp4AaABAg.AU-ygGGGTvGAUf2S1_2Bsb","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugwh_sF2veNIWYXjxFZ4AaABAg.ATy2gQ0U8T5ATyDnaN1ADg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgwDWzMImBkK2AIpsjl4AaABAg.ATxVgP-iea-ATxisLjejNT","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgwDWzMImBkK2AIpsjl4AaABAg.ATxVgP-iea-ATxjv2TTX0X","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgzLbdLudDaRNe1qoX54AaABAg.ATwembjWG9DATwgrtQ43CL","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytr_Ugwv7rtZMTAn1Igd8Th4AaABAg.ATuhqWttsJ8ATv0s8vf9WU","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgxNOhxV4pXa-Y4Yce14AaABAg.ATu_RwvZ0VpATuaLbYUjd_","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_Ugyysel8LcOQ2RhaNCF4AaABAg.ATuPdXjMJ27ATuR9cMvXTB","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgxytzZZSdBU7lqeORN4AaABAg.ATu6UPXmN2aATuQHddxqHQ","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugyds9jNBJMpFmqV7xh4AaABAg.ATtoOxrij1PATu4c8hhyDO","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"}
]
```
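The lookup-by-comment-ID step above can be sketched as follows: parse the raw LLM response (a JSON array of per-comment codes in the shape shown) and index the four coded dimensions by comment ID. This is a minimal sketch, not the tool's actual implementation; the IDs `ytr_abc123` and `ytr_def456` are hypothetical placeholders, and only the field names are taken from the output above.

```python
import json

# Hypothetical raw LLM response, mirroring the array shape shown above.
RAW_RESPONSE = """
[
  {"id": "ytr_abc123", "responsibility": "none", "reasoning": "mixed",
   "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_def456", "responsibility": "company", "reasoning": "deontological",
   "policy": "regulate", "emotion": "outrage"}
]
"""

# The four coded dimensions that appear in every record.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw response and index the coded dimensions by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: {dim: rec[dim] for dim in DIMENSIONS} for rec in records}

codes = index_codes(RAW_RESPONSE)
print(codes["ytr_def456"]["emotion"])  # → outrage
```

Indexing by ID makes the per-comment inspection an O(1) dictionary lookup rather than a scan of every raw response.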