Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think he is spot on. Sure, AI may do a breathtakingly good job of _simulating_ consciousness. But this is all through interacting with a human, and by responding to input questions or demands. I see zero evidence that there is some entity behind it which knows and understands what it is doing and why, or which would be capable of self-criticism, or experiencing emotion, or of acting spontaneously purely on its own initiative, for its own pleasure, etc.
youtube AI Moral Status 2025-05-19T20:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwvCjvp2STg8m-5JVl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx4WAhFyfqSm5VXKxp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy_yPnuyxp3eZapX3V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugw_DOPoqpGcxRYjGAl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy84KxOR1AAUFcUPuZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx0gWG9gIBboU6iOkx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx2VA3qChU2cSCSoAp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxzSo-OoQHZSzYeveN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyCfL3XoeE4249GIH14AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyXXuzn3YTn1hqYrCt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"resignation"}
]
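One way to inspect a raw response like the one above is to parse the JSON array and index each coding record by comment id. The sketch below is a minimal, hypothetical helper (the function name `parse_codings` is not from the source); it assumes only the field names visible in the raw response (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) and uses a two-record excerpt of the array for brevity.

```python
import json

# Two-record excerpt of the raw LLM response shown above.
raw = (
    '[{"id":"ytc_UgwvCjvp2STg8m-5JVl4AaABAg","responsibility":"none",'
    '"reasoning":"mixed","policy":"unclear","emotion":"mixed"},'
    '{"id":"ytc_Ugx4WAhFyfqSm5VXKxp4AaABAg","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]'
)

# The four coding dimensions that appear in every record.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw_response: str) -> dict:
    """Index each coding record by its comment id, keeping only known dimensions.

    Missing dimensions fall back to "unclear", mirroring the value the
    coder emits when it cannot decide.
    """
    records = json.loads(raw_response)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

codings = parse_codings(raw)
print(codings["ytc_UgwvCjvp2STg8m-5JVl4AaABAg"]["emotion"])  # → mixed
```

Indexing by id makes it easy to cross-check a single comment's coding against the table shown for it, without scanning the whole array by hand.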