Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Up to the point of simulation theory, Dr. Roman seemed very reasonable. As soon as he started talking about simulation theory, nothing made sense anymore. Toward the end he was arguing "if we get AI right," whereas in the beginning he was advocating for stopping research into superintelligence. His views on religion are sadly superficial.
YouTube · AI Governance · 2025-09-27T07:3… · ♥ 1
Coding Result
Responsibility: developer
Reasoning: mixed
Policy: unclear
Emotion: mixed
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugy2hLkqiCZZKpEds_R4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyJjj8SjZgkmx4h-ll4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx4RhaSciIWVxdmi3x4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxbrQ1Ptc7uzQercMt4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyONc5OoknA1ozJgtV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyYaFSxmEEAajjOkKd4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugz41DQgHn0UIzpiYVB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz4p-nsBmEqqEK3e3t4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz-syA1tn0fodBO0AN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxUS0a_5WV7cz9mapB4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
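To inspect a raw response like the one above programmatically, it can be parsed as a JSON array and indexed by comment id. This is a minimal sketch, not the tool's actual code; the excerpted entry below is the one from the response whose coding matches the displayed result.

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment codings.
raw = '''[
  {"id": "ytc_Ugz4p-nsBmEqqEK3e3t4AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]'''

# Index the codings by comment id for fast lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# Retrieve the coding for the comment shown above.
coding = codings["ytc_Ugz4p-nsBmEqqEK3e3t4AaABAg"]
print(coding["responsibility"])  # developer
```

The same lookup works for any of the ten ids in the full response, which is why keying the dictionary on `id` is preferable to scanning the list for each comment.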