Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I’m assuming Alex meant this, but perhaps it was a happy accident: I think the implication that the “belief” of consciousness can be logically inferred by an AI suggests that humanity may also be simply convincing itself of the same (a layer of subtext thereby implying that consciousness is the ability to convince and be convinced). But maybe I’m just making logic assumptions based off past information, like some kind of robot.
YouTube · AI Moral Status · 2024-07-28T04:0…
Coding Result
Dimension      | Value
Responsibility | unclear
Reasoning      | mixed
Policy         | unclear
Emotion        | mixed
Coded at       | 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzNGlYdnZvX4azzyC94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwuEm_5tqZzijSnMtV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgygCCjC4fzOBowmT5B4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxB44yr2IRR-IlOhSd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwBRL07Sa-5L_HJEiZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx5sSvm46XjxYcRSYt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyeT7urd73ugdmYmMB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugy6XnlqmvpP6HI_qTd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzC0CGIIPbtj39STZR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugwz6NMFZ2oEHOsuO3p4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"}
]
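A raw response like the one above is a JSON array with one object per coded comment, each carrying an `id` plus the four coding dimensions. A minimal parsing sketch is shown below; the allowed label sets are an assumption inferred only from the values that appear on this page, not a documented schema.

```python
import json

# Label sets observed in the raw responses on this page (assumed, not exhaustive).
DIMENSIONS = {
    "responsibility": {"ai_itself", "company", "developer", "user", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response (a JSON array of per-comment codes)
    and reject any record with a missing or unexpected label."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in DIMENSIONS.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={value!r}")
    return records
```

The validation step matters because LLM output is not guaranteed to follow the requested schema; a record with a novel or missing label is surfaced immediately rather than silently coded as garbage.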