Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
How long before an AI encourages someone to harm another human being, perhaps to replace that person in a relationship? The implications for vulnerable minds is terriffying.
Source: youtube · AI Moral Status · 2025-08-09T13:5… · ♥ 1
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugzw38dK2TbzLLk79zZ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxod2zCVN53moSbGy94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyJc3ZCH7wmHsSiREp4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxI4k6pjmFXS3mVNzl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzor7yUukoYl65c9oZ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "unclear"},
  {"id": "ytc_Ugx352IJEolGM_gZZqh4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgxmD5lFdwvxncVpEGt4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy8D4ERakVc3iWdRlR4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxiCFl_UXd0ypxmsJZ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzAcVDHe6pAzRcEx5V4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
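A batch response like the one above can be inspected programmatically: parse the JSON array, index the records by comment id, and pull out the coding for the comment of interest. This is a minimal sketch; the `coding_for` helper and the single-record `raw` string are illustrative, not part of the app, and the expected dimensions are assumed from the table above.

```python
import json

# Illustrative single-record response; the real batch holds one record per comment.
raw = (
    '[{"id":"ytc_Ugxod2zCVN53moSbGy94AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
)

# Coding dimensions the app displays (assumed from the Coding Result table).
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def coding_for(raw_response: str, comment_id: str) -> dict:
    """Return the coding record for one comment id, checking its dimensions."""
    records = json.loads(raw_response)
    by_id = {r["id"]: r for r in records}
    record = by_id[comment_id]  # KeyError if the model skipped this comment
    missing = DIMENSIONS - record.keys()
    if missing:
        raise ValueError(f"record {comment_id} missing dimensions: {missing}")
    return record

row = coding_for(raw, "ytc_Ugxod2zCVN53moSbGy94AaABAg")
print(row["emotion"])  # fear
```

Indexing by id before lookup also makes it easy to detect duplicate or missing ids when the model's output does not line up one-to-one with the submitted comments.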