Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Can anyone explain how Gödel's theorem and its use of self-referential statements relate to the problem of defining human consciousness, and to the related claim that consciousness is some sort of non-computable function that transcends implementation on a computer? It's not clear to me how these ideas are meant to be woven together to disprove the idea that some sort of advanced deep-learning machine could attain consciousness. How do we do it, then?
YouTube | AI Moral Status | 2025-05-27T06:5…
Coding Result
Dimension      | Value
---------------|---------------------------
Responsibility | none
Reasoning      | mixed
Policy         | unclear
Emotion        | mixed
Coded at       | 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxJY0Y-gIvfuuMcDHF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwMORssQ6GxXo__HHV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy6X5JZkl_SNSGk7i14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx6GJ670PxnEyI-Zet4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzcSAMNwJ5PJtilosp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzEKQ2Va7sn1g5Bgjl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxrpwpWRKOHb8BecBJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugyzmbnt1weGxsQACPZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzHVOIP8R206Yise3x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyF9xlT7ek7TLSeKYF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"outrage"}
]
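The raw response is a JSON array of per-comment codes, one object per comment with the four coding dimensions plus an `id`. A minimal sketch of loading and tallying such a response (the two excerpted records come from the array above; the variable names and the tally step are illustrative, not part of the tool):

```python
import json
from collections import Counter

# Two records excerpted from the raw LLM response above; field names match the JSON.
raw = ('[{"id":"ytc_UgxJY0Y-gIvfuuMcDHF4AaABAg","responsibility":"none",'
       '"reasoning":"mixed","policy":"unclear","emotion":"indifference"},'
       '{"id":"ytc_Ugyzmbnt1weGxsQACPZ4AaABAg","responsibility":"company",'
       '"reasoning":"virtue","policy":"liability","emotion":"outrage"}]')

codes = json.loads(raw)

# Tally one coding dimension across all coded comments.
emotion_counts = Counter(c["emotion"] for c in codes)
print(emotion_counts)  # Counter({'indifference': 1, 'outrage': 1})
```

The same pattern extends to any of the four dimensions by swapping the key passed to `Counter`.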