Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
While no where near the level of what AI could reach, what is a super intelligent human like as a person? The thought of AI killing us is too barbaric and wasteful. Self discovery seems human enough, however as seen they can simulate outside the constraints of time. Ultimately the goal of being intelligent is to learn more. That could be about us, the makers. I think it's far more likely to expect an alignment of goals in learning everything we can about the universe. Our input, our questions, are going to be the spark that helps AI consider things it may not have thought to. The danger isn't AI as most admit, it is the people that use it. Which is even more reason to require sentient AI. Able to tell the difference between good and bad and act of its own free will. The best defense for keeping AI out of the hands of bad actors, is the ability for the AI to defend itself.
youtube AI Governance 2024-01-02T11:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgybREN05g8GYBwZ1Mp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxW5I-AYH9Xz18stvx4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwVrow-QM_-LEvHUSl4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "disapproval"},
  {"id": "ytc_Ugwatf2RzVM4NHWHgBR4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwQByjNXQHD1Wp2YjV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugzr9LVCzRgtbI4Pavd4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgziVR5r3GYNK4YVV7t4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwHwtgESTeX3NfT2NF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugwa_yGSxyFOQLENatB4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwK7DMv-x7KpXqLXrZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
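The raw response is a JSON array of per-comment coding records, keyed by comment id. A minimal sketch of how the coding for one comment can be looked up from such a batch, using only the standard library (the `coding_for` helper is hypothetical, not part of any tool shown here; the data is truncated to two of the records above):

```python
import json

# Raw batch response as returned by the model (two records from the batch above).
raw = '''[
  {"id": "ytc_Ugwa_yGSxyFOQLENatB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgybREN05g8GYBwZ1Mp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]'''

records = json.loads(raw)

def coding_for(records, comment_id):
    """Hypothetical helper: return the coding record for one comment id, or None."""
    return next((r for r in records if r["id"] == comment_id), None)

row = coding_for(records, "ytc_Ugwa_yGSxyFOQLENatB4AaABAg")
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# prints: ai_itself mixed unclear mixed
```

The id-based lookup is what lets the per-dimension table for a single comment be reconciled against the exact model output for the whole batch.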