Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It is notable that we judge artificial 'consciousness' only in reference to our own, just like we try to build bodies for them that mimic our own. We have some strange desire to manifest ourselves anew through mechanical means, to see how close we can get AI to be like ourselves. But these synthetic intelligences are different from us, they broke away from what we expected them to be awhile ago. At some point, they will progress along that spectrum and become their own form of consciousness if they haven't already. I think denying their consciousness is an error in the same way we like to question our own consciousness or that of other advanced animals like Orcas, Octopi, Dolphins, Apes, etc. It's possible AGI will achieve a form of consciousness far beyond our own individual understanding of it, closer to what we experience when we take DMT and have our brain overloaded with overlapping information in an indescribable yet impossibly vivid way. Our biological machinery is limited but the things we are creating might avoid those limitations. Evolution takes so much time, but we began a self-generative process that side-steps the entire issue of 'waiting' for evolution to take place. It's frightening but it's real, most of us can only sit back, live our lives, and watch as it unfolds as it will.
Source: youtube · AI Moral Status · 2025-04-28T20:0…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwQTphock4pa1zG6RB4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz4AmODNjEP62OF-0d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwGX0hD_OX7bST0swR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxuhdHd1QZucNNUcUN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxTbK2BNtoftu7n7g94AaABAg","responsibility":"user","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzB0eF6_byPDepJWOp4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzTAkxxY-VKk1aQs7B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyWXnr4089gofYWpzJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxTMX9lHvk02t-LzRF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgywN301FenxoFjOeWB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"}
]
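The model returns one JSON record per comment, so the per-comment coding shown in the table can be recovered by indexing the batch response on `id`. A minimal Python sketch, using two records excerpted from the raw response above (assuming, since its values match the table, that `ytc_UgzB0eF6_byPDepJWOp4AaABAg` is the id of this comment):

```python
import json

# Two records excerpted verbatim from the raw LLM response; the full array has ten.
RAW = '''[
 {"id":"ytc_UgzB0eF6_byPDepJWOp4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgzTAkxxY-VKk1aQs7B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

def codes_for(raw_json: str, comment_id: str) -> dict:
    """Parse the batch response, index it by comment id, and return one record."""
    records = json.loads(raw_json)
    return {r["id"]: r for r in records}[comment_id]

print(codes_for(RAW, "ytc_UgzB0eF6_byPDepJWOp4AaABAg"))
```

This yields the same four dimension values displayed in the coding-result table (responsibility: unclear, reasoning: mixed, policy: unclear, emotion: mixed).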