Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I have tried with the “open source” models freely available and if you override the safety settings, the LLM is willing to do ABSOLUTELY EVERYTHING is asked. All sorts of horrors. Luckily this model is not very powerful so I don’t think much bad things can be made with it. Also the newer version have a much much harder and deeper security measures deeply baked in an it successfully avoids any harmful interactions.
youtube AI Governance 2025-06-30T13:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgzmUdMxijmnAsyL65F4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzMVcCu5-1GXEDuVpN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwHvIP3APkCwHfElg14AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgxELPYyWD3W_ShFEMB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzhelNaFcdN4RvK_jZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzUC6Ry2c4UU0pDRDF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgykSuvArIrXXKob8gt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyiRBsYLazkjVYk3Vx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz380IEbjxRnya-cP14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxsYrL_ZjzxdCgKBOV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
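To cross-check a coding result against the raw LLM response, one can parse the JSON array and look up the record by comment id. A minimal sketch in Python (the id and expected dimension values are taken from the record above; variable names are illustrative, and the snippet assumes the raw response is valid JSON):

```python
import json

# Abbreviated raw response: only the record for the coded comment is shown here.
raw_response = """[
  {"id": "ytc_UgxELPYyWD3W_ShFEMB4AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "liability",
   "emotion": "fear"}
]"""

records = json.loads(raw_response)
by_id = {r["id"]: r for r in records}  # index records by comment id

# Look up the record for the coded comment and verify each dimension
coded = by_id["ytc_UgxELPYyWD3W_ShFEMB4AaABAg"]
assert coded["responsibility"] == "developer"
assert coded["reasoning"] == "consequentialist"
assert coded["policy"] == "liability"
assert coded["emotion"] == "fear"
```

Indexing by id rather than by list position makes the check robust if the model returns the records in a different order than the input batch.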