Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Understanding the why things are true requires consciousness, and consciousness allows one to break the rules. A.I. is never going to be conscious. It might get to a point where the practical application won’t be much different than if it was conscious, but it will never have a true self or a sense of why something is true.
youtube AI Moral Status 2025-07-13T03:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyyJDDL1lwcD8u6Ko54AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwg4iHfad5Uz2bF5Ul4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx6aCX8_1tPgizcIRx4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy-aYtWOED2TT-RRWF4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxu9ZLju2u5QIlVWsB4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwGeei1MVWnkt4ctxR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwLunR9OjtiVMgdw5x4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugz-wmFmT6bi-1rRQZh4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwpNfHGq7hUgowcwe94AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgykUS01JOPXq3bUia54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]
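To inspect the record for one coded comment, the raw LLM response can be parsed as a JSON array and indexed by comment id. This is a minimal sketch, assuming the raw response above is available as a Python string (here abbreviated to a single record); the variable names `raw` and `by_id` are illustrative, not part of any pipeline shown here.

```python
import json

# Abbreviated stand-in for the "Raw LLM Response" text above
# (one record kept; the full response is an array of ten).
raw = '[{"id":"ytc_Ugz-wmFmT6bi-1rRQZh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}]'

records = json.loads(raw)
# Index records by comment id for quick lookup of any coded comment.
by_id = {r["id"]: r for r in records}

record = by_id["ytc_Ugz-wmFmT6bi-1rRQZh4AaABAg"]
print(record["emotion"])  # resignation
```

Each record carries the same four coding dimensions shown in the result table (responsibility, reasoning, policy, emotion), so the same lookup pattern works for any dimension.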