Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
All the actual work is done by the words "consciousness" and "pain". If I hand you a pile of computer code, how do you tell if its conscious? You could make a low level simulation of a particular human brain, that acted like the particular human being simulated, and that would be conscious for the same reason the human is. But for most designs of AI, "is it conscious?" is a hard philosophy problem.
Source: YouTube — AI Moral Status (2020-07-08T14:4…)
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugym08kqdNxUkx2-10h4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugw_FVBepk0HmLi4XIl4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx-doPVTFSsH45POn14AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugy_U1c8hw1dbnQUZP54AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwSfROM85Ux7Gs0y7F4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzOEG9iNvpbccy3GwZ4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzX6dYAXDanVaf0hUh4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwRMrZoIrY08Mv59Ht4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxKIra1BpAyZvQy4YZ4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwnY8UO0f3NEOf4jd94AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "ban", "emotion": "outrage"}
]
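The raw response is a JSON array with one object per coded comment, keyed by comment id. A minimal sketch (in Python, using a trimmed two-entry excerpt of the array above) of how such a response can be parsed and indexed so one comment's codes can be looked up by id:

```python
import json

# Trimmed excerpt of the raw LLM response shown above (the full array
# has ten entries); the ids and values are copied from that response.
RAW = """[
  {"id": "ytc_UgzOEG9iNvpbccy3GwZ4AaABAg",
   "responsibility": "unclear", "reasoning": "mixed",
   "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwnY8UO0f3NEOf4jd94AaABAg",
   "responsibility": "ai_itself", "reasoning": "mixed",
   "policy": "ban", "emotion": "outrage"}
]"""

# Index the coded entries by comment id for quick lookup.
codes_by_id = {entry["id"]: entry for entry in json.loads(RAW)}

# Retrieve the codes for one comment.
codes = codes_by_id["ytc_UgzOEG9iNvpbccy3GwZ4AaABAg"]
print(codes["responsibility"], codes["emotion"])  # → unclear mixed
```

The variable names (`RAW`, `codes_by_id`) are illustrative, not part of the tool's own pipeline; the same lookup pattern applies to the full ten-entry array.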