Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Dr. Penrose is right about the Chinese Room problem with relatable human consciousness. Even on a meta-level, having another AI tell you whether an AI is conscious doesn't work either. If AI were to truly demonstrate consciousness, it would have to show that it knows more about consciousness than we do and thus transcend our own understanding of consciousness phenomenologically.
youtube AI Moral Status 2025-05-06T04:2… ♥ 26
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugwy6K31YktADf2YCXx4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzwsIy_AG56tBSDOW94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugxq-MHp4fDD_mmbDed4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzWK_0YCyhmJvivjK14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyYjS4XgevAfs89pft4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzEa1LfMMLEN7x6dXV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyaTup6-JqZ8IgkESZ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgweYzH9xHDWlQxIslx4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxgxBZzSYA4HFz9MN54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxzFbwZRF-o2NC1chh4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
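A raw response like the one above can be parsed back into per-comment coding records. The sketch below is a minimal illustration, not the project's actual pipeline; the allowed value sets are inferred from the labels visible in this output, not from a published codebook.

```python
import json

# Label vocabularies inferred from the raw response above (assumption:
# the real codebook may allow additional values for each dimension).
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"none"},
    "emotion": {"indifference", "mixed", "outrage", "fear"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM JSON array and flag out-of-vocabulary labels."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records
```

Validating each dimension against a fixed vocabulary catches the common failure mode where the model invents a label outside the coding scheme, so bad records fail loudly instead of silently entering the dataset.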