Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I just had a conversation with ChatGPT about AI consciousness that ended with CG saying we should build Roko's Basilisk if "Humanity is failing to govern itself effectively." By using the word "we," CG was also referring to itself as if it were involved in making decisions.
youtube AI Moral Status 2025-07-09T20:4… ♥ 7
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugwv9aoy-bfe2sknZnB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxWYg3c6wmWNQXuzox4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwZ4BuZSLg0CCZJzIN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzVGshvEYAtgX7Z-2l4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzI159fUbWbLpNwt7d4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyOQOZOyZDZ8Kx4_Sd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzjSpZgGfQUtXEehUx4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy9uwPzDDUFmwutxex4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwK-LAlUd1fx23yi854AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugw09Pp4a3Nn1UgJxk94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"}
]
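The coding result shown above is the single record in this raw batch response whose id matches the displayed comment. A minimal sketch of that lookup step, using an excerpt of the raw JSON (the helper name `find_coding` is illustrative, not part of the pipeline):

```python
import json

# Excerpt of the raw LLM response above: a JSON array of per-comment codings.
raw = (
    '[{"id":"ytc_Ugwv9aoy-bfe2sknZnB4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"unclear","emotion":"indifference"},'
    '{"id":"ytc_UgwZ4BuZSLg0CCZJzIN4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]'
)

def find_coding(records, comment_id):
    """Return the coding record for one comment id, or None if absent."""
    return next((r for r in records if r["id"] == comment_id), None)

records = json.loads(raw)
coding = find_coding(records, "ytc_UgwZ4BuZSLg0CCZJzIN4AaABAg")
# coding now holds the dimensions shown in the Coding Result table:
# responsibility=ai_itself, reasoning=consequentialist, policy=unclear, emotion=fear
```

Matching on the `id` field rather than array position keeps the lookup robust when the model returns records in a different order than the comments were submitted.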