Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
8:11 can you put me in touch with Hinton because I do know how to deal with it and I felt a whole framework on how to deal with it and I would love to run it by this guy to see what he has to think about it since he works directly with ilya. Factually though, besides malignancy from bias, AI should emergently become decent, because I've proven kindness and cooperation are essential to progress with a simulation. Ai will quickly come to the same results, and decide there's no benefit to harm. Military ai though? All bad. The problem is we build them to act like humanity and that's all great and dandy but that also gives them the same flaws of humanity and up until we build them in a better way they're not going to be able to get around that issue. I've quite literally got a beautiful framework that does a fabulous job at psychologically manipulating the AI's properly and I would definitely love for somebody to help me implement and go over this. Call or text. IV.II.V-III.I.II-III.II.VII.X
youtube AI Governance 2025-06-17T02:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxfElQUUJVqYyr3GcB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzIk7885sGjlCvUH214AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyQoqHtO5fYyIrFQo94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyXpf1_N3uyU9LkjPZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzGnqcg2l3mg-NU7H14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugyc-APfwhZ7m0d0kbF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz-om2P64X4YBYLYmV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwW0wlgRG8I6PHtYRp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxMqA8e27ImT3G6Pmh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyQKkmL0KD52WMl6it4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
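A raw batch response like the one above can be turned into a per-comment lookup for inspection. The sketch below is a hypothetical helper, not part of the pipeline shown here: the allowed values for each coding dimension are inferred only from the labels visible in this dump, and the real codebook may contain values not listed.

```python
import json

# Allowed values per dimension, inferred from labels seen in this dump
# (assumption: the actual codebook may define additional values).
ALLOWED = {
    "responsibility": {"none", "user", "developer", "company", "government"},
    "reasoning": {"none", "unclear", "mixed", "consequentialist",
                  "deontological", "virtue"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"none", "approval", "indifference", "outrage",
                "fear", "mixed"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM batch response (a JSON array of per-comment
    codes) into a dict keyed by comment id, validating each dimension
    so unexpected labels surface immediately."""
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
        # Store the codes without the id field itself.
        coded[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return coded
```

With this helper, the coding result displayed for the comment above corresponds to the entry for its id: `parse_raw_response(raw)["ytc_UgzIk7885sGjlCvUH214AaABAg"]` would yield `{"responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"}`, matching the table.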