Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This whole video is built on the premise that consciousness exists. Philosophy will never reach any form of consensus, but the growing trend is to believe that consciousness isn't a thing, that we are just fooling ourselves. But whatever the answer to that may be, this is actually the most meaningless question one can ask about AI. Being self-conscious is something internal, if anything; it doesn't have any impact on anything, and that's why consciousness is not measurable. You don't need to be conscious to be "a new species"; we don't even assume consciousness in most animal species out there. It feels like "consciousness" is often confused with self-deciding goals in this video, which come from desires. It all emerged in us from the need to survive and reproduce, something we could potentially train our AI to do, but certainly don't want to. But if we gave an AI curiosity, rewarding it for unexpected outcomes and for expanding its knowledge, it could certainly produce enough intermediate goals and emergent behavior for us to call it conscious. I call a neural network conscious when it runs in a persistent environment and continuously learns from it (not just during initial training), with self-awareness (introspection capabilities and keeping track of its own states), and yes, those already exist.
youtube AI Moral Status 2023-08-21T10:2…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear

Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_Ugw9u4eyvrqOT4sgZQp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxiXDKDKbnlHsZ-iF94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugyv9xWt6CgZLRPI5Mp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgyEHKOELB_n2rsdR694AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzmKWe4RHf3UpSbBJV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy7hXRZf0Pz0H68rCR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"mixed"}, {"id":"ytc_Ugws97o1bOtnRxPPR9N4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwP_kKxQew89OjLTpl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugzw2UeKiZELfv5RbkV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugw-w5gPVzf7LjLO2gt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}]
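A raw response like the one above can be checked programmatically before the codes are stored. The following is a minimal sketch, assuming Python: it parses the JSON array and validates each record against the four coding dimensions. The allowed value sets in `ALLOWED` are inferred from the responses shown here, not from the full codebook, and `parse_raw_response` is a hypothetical helper name.

```python
import json

# Allowed values per coding dimension (inferred from the responses above;
# the actual codebook may define additional values).
ALLOWED = {
    "responsibility": {"none", "distributed", "user", "company", "ai_itself", "unclear"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"unclear", "none", "regulate", "industry_self", "liability"},
    "emotion": {"indifference", "mixed", "outrage", "fear", "approval", "unclear"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate every record."""
    records = json.loads(raw)  # raises ValueError on malformed JSON
    for rec in records:
        if "id" not in rec:
            raise ValueError("record missing 'id'")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad value for {dim!r}: {rec.get(dim)!r}")
    return records

# Example with the first record from the response above:
raw = ('[{"id":"ytc_Ugw9u4eyvrqOT4sgZQp4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
records = parse_raw_response(raw)
print(records[0]["emotion"])  # indifference
```

Validating before storage catches both malformed JSON (e.g. an unbalanced closing bracket) and hallucinated category labels, which an LLM coder can occasionally emit.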