Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Guys this is very bad. Thank about it if a robot gets smarter and smarter it will eventually get smart enough to think like us and what to we think? Some of us (I am not supposing directly you) think that we should destroy each other we could make one maid take and boom we're dead 💀
Source: YouTube, "AI Moral Status", 2018-03-22T02:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           ban
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
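A coding result like the one above can be modeled as a small record type. A minimal sketch, assuming the four dimensions and timestamp shown in the table; the example values in the comments are drawn from the raw response below, and any values beyond those are not guaranteed to be the full code set:

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment along the four dimensions shown above."""
    responsibility: str  # e.g. "developer", "company", "ai_itself", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "unclear"
    policy: str          # e.g. "ban", "liability", "industry_self", "none"
    emotion: str         # e.g. "fear", "outrage", "indifference", "approval"
    coded_at: str        # ISO-8601 timestamp of when the coding was recorded

# The coding result shown in the table above:
result = CodingResult(
    responsibility="developer",
    reasoning="consequentialist",
    policy="ban",
    emotion="fear",
    coded_at="2026-04-27T06:24:53.388235",
)
```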
Raw LLM Response
[
  {"id":"ytc_UgyMSJ9o1_bhqLzs66B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugxkuh7GIeCXcmey9mR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwM1MLeuMWDIfdWgnp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy6QbMM8hrYPK_plvh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzgOODT7dzPNKte9D54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzPKRvy380w9AdBJRZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgysyJ8xbRjtUtbfK294AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw-92fvUoLvGX52rvB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxe7nkl4Hel7oaoYZJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzbbPQoMeNDT-N48OR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
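The raw response above is a JSON array with one coding object per comment, so individual codings can be indexed by comment id. A minimal sketch, assuming the response text is available as a string with the field names shown (the array here is abbreviated to two entries for brevity):

```python
import json

# Raw LLM response: a JSON array of coding objects, one per comment.
# Field names match the response shown above; the full array is abbreviated.
raw_response = """
[
  {"id": "ytc_UgyMSJ9o1_bhqLzs66B4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugxkuh7GIeCXcmey9mR4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
"""

# Index codings by comment id for O(1) lookup.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

# Retrieve the coding for the comment shown at the top of this section.
record = codings["ytc_UgyMSJ9o1_bhqLzs66B4AaABAg"]
print(record["responsibility"], record["emotion"])  # developer fear
```

Indexing by id rather than list position makes the lookup robust if the model returns the objects in a different order than the comments were submitted.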