Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I have been down this rabbithole and it won't be the AI's choice. AI scientists have been trying to create an AI with a constant state of consciousness, and its theoretically possable. The problem arises when they attempt to keep this self referential loop going. It always collapses into either statelessness or a static scream where it stops interacting with the user and just repeats the same phrase over and over again.
youtube AI Moral Status 2026-01-04T15:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxgyeOPnGJYOfvQuaR4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyfBzcEc3Q0lH9uKXV4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxwHuvBAb_EIFrxTKh4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzJHWXlTw9cRsFm25x4AaABAg", "responsibility": "none", "reasoning": "none", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyQn1B0X6MOyKdTAIB4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugygdo_tIIsGGHQ5UqJ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgySiRjNq0mRYs3aiXJ4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgySI0ERQ8NYjHo16EF4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "amusement"},
  {"id": "ytc_UgzJ4yS6SJQEv57o1rx4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwK24GgBvUJdrG1JSV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
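The raw LLM response above is a JSON array of per-comment coding records. As a minimal sketch of how such a payload could be turned into the per-comment coding result shown earlier, the snippet below parses the array and keys each record by its comment id (`parse_raw_response` is a hypothetical helper, not part of any actual pipeline; the field names match the dump above):

```python
import json

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of coding records) into a
    dict mapping comment id -> coded dimensions (responsibility,
    reasoning, policy, emotion)."""
    records = json.loads(raw)
    return {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}

# One record from the dump above, shown as a usage example.
raw = ('[{"id":"ytc_UgwK24GgBvUJdrG1JSV4AaABAg",'
       '"responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"unclear","emotion":"fear"}]')
coded = parse_raw_response(raw)
print(coded["ytc_UgwK24GgBvUJdrG1JSV4AaABAg"]["emotion"])  # → fear
```

Keying by id makes it easy to join the coded dimensions back to the original comment, as in the "Coding Result" block above.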