Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
When he says, " If it's not safe, we don't build it, right?" We build all kinds of things that aren't safe, knowing they will be destructive. Nuclear Bombs, Misses, Bridges on Cliffside, pharmaceuticals, alcohol, cigarettes, fast food, hang gliders, and much more. To say humans would consciously make a moral decision to not build AI because it isn't safe, is just a flat out lie and completely ludicrous
youtube AI Governance 2024-01-09T20:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyCIFp1aRTbbbaq25B4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwcqOjZJnGyUAvMD4t4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugx0-6I_-oj0bjw0kA14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxMn6Mfe0z77On4yB94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_UgxLdXqiqkX34Sr-8P14AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzkR3YcqCzYw2aAwpJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwqYNX_r1YwC48_Xy14AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyT4Ar5VRETgT8CMiZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyIDoUj2O9EXIssi2p4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzNzYUwSCC7w5lwmlt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
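A minimal sketch of how the raw response above can be matched back to an individual comment: the LLM returns a JSON array of records keyed by comment id, so looking up the coded dimensions for one comment is a parse-and-filter. The `lookup` helper and the truncated one-record `raw` string here are illustrative, not part of the actual pipeline; the record shown is copied from the fifth entry of the response above, which corresponds to the displayed comment.

```python
import json

# Raw LLM response: a JSON array of coded records, one object per comment.
# Only one record is reproduced here; the real response holds all ten.
raw = '''[
  {"id": "ytc_UgxLdXqiqkX34Sr-8P14AaABAg",
   "responsibility": "developer",
   "reasoning": "deontological",
   "policy": "regulate",
   "emotion": "outrage"}
]'''

def lookup(raw_response, comment_id):
    """Return the coded dimensions for one comment id, or None if absent."""
    records = json.loads(raw_response)
    return next((r for r in records if r["id"] == comment_id), None)

record = lookup(raw, "ytc_UgxLdXqiqkX34Sr-8P14AaABAg")
print(record["policy"])  # regulate
```

In a real pipeline the parse step would also validate that every returned id matches a submitted comment and that each dimension value falls in the allowed codebook, before writing the "Coding Result" shown above.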