Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Catastrophe will happen regardless of solving alignment for two reasons. Just cos a model is aligned doesn't mean it can't be jailbreaked eventually. And just cos some entity solves alignment doesn't mean suddenly all A.I. development, Chinese models, adversarial actors and military are going to build their models from the ground up with alignment over maximation
youtube AI Governance 2025-08-23T09:2… ♥ 9
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugwim9XQC9rU_cnMzhN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyO6Ytj4-Ipljm9bO54AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzBUp9cqxp-Q-SKku14AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugz2EiMn64SuvupH3-V4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugy_vIeMWyPOX-y2BsR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyOOA_hESTcxGbeUjt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugy1T_34YaiGCD0NUaF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugz5YYrTA7lZg1omUoZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzbvp-J4ZvzkrKuSpl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxRl34qJ6wXvrpB-Ax4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
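The coding result shown above is recovered by matching the comment's ID against the raw LLM response. A minimal sketch of that lookup, assuming the raw response parses as the JSON array shown (a shortened excerpt is inlined here for illustration):

```python
import json

# Excerpt of the raw LLM response (two of the ten entries shown above).
raw = '''
[
  {"id": "ytc_Ugwim9XQC9rU_cnMzhN4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzBUp9cqxp-Q-SKku14AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
'''

# Index each coded entry by its comment ID for direct lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# Look up the coding for the comment displayed above.
coding = codings["ytc_UgzBUp9cqxp-Q-SKku14AaABAg"]
print(coding["responsibility"], coding["emotion"])  # distributed fear
```

This yields exactly the dimension values shown in the Coding Result table (responsibility = distributed, emotion = fear), confirming that the table is derived from the third entry of the raw response.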