Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think morals are very different from the point of view of a creator of a simulation. First of all, think of it, what if WE created the simulation for ourselves to actually experience different kind of lifes, including bad ones. Since it's "just a game" morals are irrelevant. His logic is based solely of us being trapped by some other entity. Also the fact that we can create simulations does not prove at all that we live in a simulation. Us having a common belief is not a proof either. Faulty logic. I also don't agree with AI being actually intelligent. I still just see it as algorithmic learning, the only safety issue is to give executive power to something that behaves like if it was sentient but in reality is not sentient and cannot make predictable decisions. I might be wrong, but I haven't seen an AI so far that was actually intelligent. ChatGPT is definitely not it. It's a very interesting conversation indeed, but seems more like a fantasy to me. I'm open to be proven wrong
Source: youtube · AI Governance · 2025-09-09T18:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxBx_AOT7n0JHbMZc14AaABAg", "responsibility": "ai_itself", "reasoning": "mixed",           "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_Ugxq5fBpPrA9zIe2Y-V4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgyF0b4ngsBk8KJlBtZ4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgyE1ha1LUCSFazqX714AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgxSIl91agQNiduXObx4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgzFU1C_anOly4Iqac54AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugx2reCYoruZ_vg0CMZ4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgwQqHr__6EDW-icyzh4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugzig7Q88UfHCg4x5Lt4AaABAg", "responsibility": "user",      "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_Ugy3SY6eFoL9CVlQG094AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"}
]
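The raw response is a JSON array with one coding object per comment, so extracting the coding shown above reduces to parsing the array and filtering by comment id. A minimal sketch (the `coding_for` helper is hypothetical, not part of the tool; the array below is abbreviated to the single entry that matches this page's Coding Result):

```python
import json

# Abbreviated raw LLM response: a JSON array of per-comment codings.
# The full array for this batch appears above.
raw = '''
[
  {"id": "ytc_Ugxq5fBpPrA9zIe2Y-V4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"}
]
'''

def coding_for(raw_response, comment_id):
    """Return the coding dict for one comment id, or None if absent."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return entry
    return None

coding = coding_for(raw, "ytc_Ugxq5fBpPrA9zIe2Y-V4AaABAg")
print(coding["reasoning"])  # consequentialist
```

If the model returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is the natural point to flag a batch for re-coding.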