Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
According to Roman, if we did truly live in a simulation, why should we be concerned about AI getting or not getting out of control? I.e. we don’t know which outcome is the desirable one (for the simulation to continue) or whether either outcomes is relevant at all in the grand scheme of things.
Source: youtube · AI Governance · 2025-09-08T21:2…
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
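Each dimension should hold exactly one value from a fixed set of codes. Below is a minimal validation sketch in Python; the allowed values are inferred solely from the codes visible on this page, so the tool's actual codebook may be larger.

```python
# Allowed codes per dimension, inferred only from the values visible in this
# response; this is an assumption, not the tool's official codebook.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning":      {"consequentialist", "deontological", "virtue", "mixed"},
    "policy":         {"none", "ban", "regulate"},
    "emotion":        {"indifference", "fear", "approval", "outrage",
                       "resignation", "mixed"},
}

def invalid_dimensions(record: dict) -> list[str]:
    """Return the dimensions whose coded value is missing or not allowed."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The coding shown above passes: every dimension holds an allowed value.
assert not invalid_dimensions({"responsibility": "none",
                               "reasoning": "consequentialist",
                               "policy": "none",
                               "emotion": "indifference"})
```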
Raw LLM Response
[ {"id":"ytc_Ugy9SnQmKT0aNjbONYt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugw7KeUxE0lrngA9y-x4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_Ugy_86EHfeBKD7iCS0l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx4AhwjSFzNR0D112Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgzTU5kDGOi1cmJAZTF4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyGjfwdqa5v5fGoyB94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzGGs8Eap3mFUrPDEN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugxg6_Xlwd7bR9QKlmh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxFTckaKCurmO6pvqx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugw4StSooTxiNS9BT7d4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"} ]