Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The solution for AI safety is to make it mandatory that if AI devices or robots harm to auto destroy it-selves automatically
YouTube · AI Governance · 2026-01-23T22:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxRVRf0Oc4X8lkP9nZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw1mya4pFs2mAse5QV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxgsLdWg87q0j0AiId4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx72W57AUgHfJ2TR4R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxHymz_feN3rD7s2Y14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyyVZ9bbWv7idkKvit4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzpgGRHEemBFtTis6d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy2aiRvr0SZF-KqQvd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzCpPmJtUdwW5I4jaR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxy6eJtpvEX3ZCRvWh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]