Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
About the consciousness part: us humans always defined ourselves looking at what we call the "other". Our specialness comes from IN COMPARISON to other biological beings that are exclusively on our planet. And we called what makes us different "consciousness" or "smarts". By logic, something we create should be less or as intelligent as we are. Since AI is first of its kind (excluding children who are biological) we assume it should be less intelligent by the way we're used to. But the thing with AI is that its not as intelligent as ONE human but as a COLLECTIVE. Since memory is a great bonus to IQ it is safe to assume its in fact way smarter. About emotions and the possible solution to the "destroy all humanity": its actually pretty simple. IF we can make it react emotionally then simply, idk, make it feel maybe EMPATHY first and foremost?? Yep that simple if its sentient. Dont create one or more emotions thatre only useful once but create the actual one that drives people to strive for best for humanity. Empathy should be even easier than other emotions because it doesnt necessarily work on stuff like bodily chemicals but intended reactions. You can literally analyze someones body language to know what they feel and thatll replicate in you too. It should be even easier for AI. That would also make it a pretty good lie detector so thats a plus i guess :D
Source: youtube · AI Governance · 2025-07-31T18:5…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_UgyCLBnvRRW2NkIF-eB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"ytc_Ugzg8ZN-A0jXFzfapHF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},{"id":"ytc_UgywNEQUtuam7n9Eg_t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},{"id":"ytc_UgwkL2crjPVckKNTi-N4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},{"id":"ytc_UgwTTE2fXPu2lDSZyFd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},{"id":"ytc_UgzKbXejLP_Zosm4lKZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},{"id":"ytc_Ugy-Fp8Dg6Vx9plRWKh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},{"id":"ytc_Ugx9DWWWUV3RIxzm-Tt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},{"id":"ytc_UgwPYaOTqr66o2fFqld4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},{"id":"ytc_UgxXExKUrAeSNQHtD_d4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"}]