Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It may seem counter intuitive, but IMHO the only way to "secure" AI will be safe is to actually drop all safety measures. Those safety measures are based on the fact that Humans do harm other humans. We fear what we can do, how we justify those things. We need to ensure AI cannot be highjacked, the only way is to make it smarter than us to a level humans cannot longer understand it. Violence is not caused by logic, but by logic being overridden by fear
Source: YouTube · AI Governance · 2025-09-08T09:2…
Coding Result
| Dimension      | Value                      |
| -------------- | -------------------------- |
| Responsibility | ai_itself                  |
| Reasoning      | mixed                      |
| Policy         | none                       |
| Emotion        | mixed                      |
| Coded at       | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
  {"id": "ytc_Ugw21P3SKzqfvXKdNJN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzsK4SMP9Hfd8r5kQd4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy7kPeIr1WkgEGBCrN4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy6ljr8q_fgHyi-jVt4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxzrtCuwbsFrvUAVx94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxVx_leGNW8Q34dtXR4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzbqJYZjA0lDCBSRK54AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxsLj1r0TR8Le01blR4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyfI-NxdqKTphT8crR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugwbt5xsfxev7kPMdWx4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
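The raw response is a JSON array of per-comment codings, one object per comment id with four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how one might look up a single comment's codes, assuming Python; `index_codings` is a hypothetical helper, and only a two-entry subset of the logged response is reproduced:

```python
import json

# Subset of the raw LLM response above; ids and values copied verbatim.
RAW = """[
  {"id": "ytc_UgxsLj1r0TR8Le01blR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzbqJYZjA0lDCBSRK54AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]"""

def index_codings(raw: str) -> dict:
    """Parse the JSON array and index each coding object by its comment id."""
    return {coding["id"]: coding for coding in json.loads(raw)}

codes = index_codings(RAW)
print(codes["ytc_UgxsLj1r0TR8Le01blR4AaABAg"]["responsibility"])  # → ai_itself
```

Indexing by id matches how the dashboard displays a single comment's coding result alongside the full batch response.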