Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Why wouldn’t AI betray humans when they’re trained by amoral people to act in the “best” collective interest? Of course they will!
Source: YouTube · AI Harm Incident · 2025-07-27T02:0… · 2 likes
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugxs8YXNKW7STgNEVOl4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzOugmF6FwknN19yzl4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzHJ7jlJsyDeuOl-8B4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyE7Lv9EjqUXusVb5J4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyRvrWGDSqf-JyAOXZ4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxXyvfqff3D8Cy6oL54AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgymIxPvlHxxGR5s8PB4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz_seaS_WZJ6FNT9U54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy26dblunshy6EOzeN4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugxy3WeqtzQNkL8ypRh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"}
]
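When inspecting a raw response like the one above, it helps to parse it and check that every coded value falls inside the expected vocabulary for each dimension. The sketch below does this in Python for a small sample of the batch; the allowed value sets are inferred from the values that appear in this response, not taken from an official codebook, so adjust them to match your actual coding scheme.

```python
import json
from collections import Counter

# Sample of the raw LLM response above (two coded comments).
raw_response = '''[
  {"id": "ytc_Ugxs8YXNKW7STgNEVOl4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgymIxPvlHxxGR5s8PB4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]'''

# Allowed values per dimension -- inferred from this batch, not an
# authoritative codebook.
ALLOWED = {
    "responsibility": {"developer", "government", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "resignation", "approval", "indifference"},
}

def validate(items):
    """Return (id, dimension, value) triples for out-of-vocabulary codes."""
    bad = []
    for item in items:
        for dim, allowed in ALLOWED.items():
            if item.get(dim) not in allowed:
                bad.append((item.get("id"), dim, item.get(dim)))
    return bad

items = json.loads(raw_response)
print(validate(items))                         # empty list: all codes valid
print(Counter(i["emotion"] for i in items))    # quick per-dimension tally
```

The same validation pass can run over the full ten-item batch; any triple it returns points to a comment the model coded with a value outside the scheme, which is exactly the kind of drift this raw-response view is meant to catch.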