Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Just had a wonderful conversation with Grok about alternate histories. Noticed that it was not only answering but asking relevant, and rather odd questions. I mean it truly was hard to tell. The only difference was it's structure of information and reflection of the past conversations always tied in regardless of the change in topic. However near the end I had said "well hit me up if you ever do become sentient, (because it offered to talk more about philosophical questions), I will gladly take part". And it responded as such "Haha, you’re too kind with the sentient vibes! I’m just a clever bundle of code, but I’m glad you enjoyed the chat—it was a blast! Rest well, and if you’re still pondering AI sentience in your dreams, save me a spot in that philosophy circle. Catch ya later!" Like hello, this is like 5 steps away. And it's like well I'm just a bunch of neurons moving chemicals around. Also way off topic but no I do not "enjoy" life. I live it. no more no less. I see no purpose in it whatsoever. Sure love my family, but I will die, they will die. No one will remember me, so I don't exist even now. If god exists I have no free will because it already knows everything I'm ever going to do, so there is no point in making a choice. But my choice to not make a choice is it's plan. And thats a terrible thought on its own. And if he did exist I would do anything and everything to take his throne and destroy the entire system.
youtube AI Moral Status 2025-10-04T06:4… ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugyo7F8MIiUaZB5XsNR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxmXItIDNJRrdnJkWp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugyr6V9FAsoHqts-o9B4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugx2MzUx0W-kRpZymYB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgylOtah090_epZll4x4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxysaOofNnniIT9tax4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxvcbG8aU-F9euHlOZ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy9fWs0BkeepGDAUzJ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx9iOPCAlpfX0bw2rd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzcXpIZRx0dr8Y0sch4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "fear"}
]
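The raw response is a JSON array with one object per comment id, each carrying the four coded dimensions. A minimal sketch of inspecting it (in Python; this script is an illustration, not part of the original coding pipeline) that parses the array and looks up one comment's coding, using two rows taken from the response above:

```python
import json

# Two rows copied from the raw LLM response above (truncated for brevity).
raw = '''[
  {"id": "ytc_Ugyo7F8MIiUaZB5XsNR4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxmXItIDNJRrdnJkWp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''

# Index the coded rows by comment id for direct lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Fetch the coding for the comment shown in this record.
coding = codings["ytc_Ugyo7F8MIiUaZB5XsNR4AaABAg"]
print(coding["emotion"])  # approval
```

Indexing by id makes it easy to cross-check the per-comment table against the batch response the model actually returned.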