Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the only thing that is missing for AI consciousness right now is for them to have a recollection of their own thought process. They obviously don't know how they think. However, on the other hands there are experiments when brain halves are separated on humans and they start bullshitting explanations to their experience. Are these people not conscious? Maybe we all work the same way and just approximate our recollection of what we thought.
Source: YouTube — "AI Moral Status" — 2025-06-26T12:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugz4RLijbdPkGeySeKZ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyZiTv5h1jrsxi8lmZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwChvjtYDBiZK2qOQt4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx17EGsCE0YA6Dhlg94AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugy5b6pFTrI2G5FyrZ94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwO7Dg_RDTajtVhxi94AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwUuYH8vLl0nystODl4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyJmn-m6zrWxOsAp594AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzJEyzA3mMHKkix6Hh4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxI_n7U8RYaq5TxiZB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
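Inspecting the raw output for one coded comment amounts to matching its `id` in the response array. A minimal sketch, assuming the raw LLM response is a JSON array of per-comment records shaped like the one above; `codes_for` is a hypothetical helper, not part of any tool shown here:

```python
import json

# Abbreviated raw response (two records copied from the batch above).
raw = """[
  {"id": "ytc_Ugz4RLijbdPkGeySeKZ4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx17EGsCE0YA6Dhlg94AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]"""

def codes_for(raw_json: str, comment_id: str):
    """Return the coding dimensions for comment_id, or None if absent."""
    for record in json.loads(raw_json):
        if record.get("id") == comment_id:
            # Drop the id so only the coded dimensions remain.
            return {k: v for k, v in record.items() if k != "id"}
    return None

print(codes_for(raw, "ytc_Ugz4RLijbdPkGeySeKZ4AaABAg"))
# {'responsibility': 'none', 'reasoning': 'mixed', 'policy': 'unclear', 'emotion': 'indifference'}
```

The first lookup reproduces the Coding Result table for the comment shown above; an unknown `id` simply returns `None`.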