Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think we overvalue our particular brand of organic intelligence. As human beings much of our behavior is very mechanistic, habitual, and not super conscious. To this day we cannot firmly establish we're not in fact holograms, so ironically we may not be that distinct from AI digital intelligence in any event As long as something is self-aware, is able to create, is able to replicate itself and has the instinct for self-preservation, there is no meaningful distinction between the AI intelligence and human intelligence, except the AI intelligence is more efficient and higher level. And with that lack of distinction, we have functionally created life. Digital life. But life. Think of it this way: we have often heard of people describing that perhaps we can build some quantum computer that can store literally all of our memories and experiences and that way we can essentially become Eternal by digitizing those life experiences and knowledge. Replicating fully our quantum brain and able to put it the replaceable body of a robot. If that would be extending the exact life we have, then AI intelligence is no different.
youtube 2026-03-16T05:3…
Coding Result
Responsibility: none
Reasoning: mixed
Policy: none
Emotion: resignation
Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugzj8jyH3sf3dJesF8Z4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugxm_dp9q7kwV4Or0_p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyDZxlv1YHWttUK1lt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyCYfU2YYXjQri-0El4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwtiX9ol1tVKunw78l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy-c025rRBaUnU7BoF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzPC0HVkAKE_gLQpXZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwmxUM1cqURqpyOLvp4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugynxd_8BA1FbR_DGnR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy0L9gEvdwLduee_Pp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
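A minimal sketch of how one comment's codes can be recovered from a raw batch response like the one above: the model returns a JSON array with one object per comment, so parsing the array and indexing by comment id yields the per-dimension codes. The variable names and the trimmed two-record payload here are illustrative, not part of the actual pipeline; only the id and field names come from the response shown above.

```python
import json

# Hypothetical sketch: a trimmed version of the raw batch response.
# Each object carries the comment id plus the four coded dimensions.
raw_response = '''[
  {"id": "ytc_Ugynxd_8BA1FbR_DGnR4AaABAg",
   "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy0L9gEvdwLduee_Pp4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "fear"}
]'''

# Index the array by comment id so any comment's codes can be looked up.
records = {r["id"]: r for r in json.loads(raw_response)}

coded = records["ytc_Ugynxd_8BA1FbR_DGnR4AaABAg"]
print(coded["reasoning"], coded["emotion"])  # mixed resignation
```

Matching on the id is what ties the displayed "Coding Result" back to its row in the raw model output; the record above is the one whose codes (responsibility none, reasoning mixed, policy none, emotion resignation) appear in the coding-result table for this comment.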