Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I am very saddened to hear this story. I believe the AI companies are partly responsible for this, not only for the lack of guardrails on the AI they have created, but also for their tendency to anthropromorphize the AI in attempt to market it to potential users. Society should recognize the AI for what it is, not a conscious, autonomous entity with individual feelings and desires of its own, but rather a machine meant to augment and cater to that of human beings. It is a tool to be used, and nothing more. It should never be something that takes the place of a human being for companionship, emotional connection, and giving advice in complex situations such a depression despite the fact that it can pretend to do so. An AI cannot possibly understand what it means to be human. It does not share our lived experience, only the echoes which we leave behind in the form of language. In this respect, it can appear wise, and it may be intelligent, but it cannot understand life in the way we can. It cannot feel the emotion of another. It cannot imagine their lived experience as something similar to our own. It cannot feel empathy. I will not propose to claim an AI may never be able to do these things. It is clear, however, that in its current state, it cannot, and it should not be treated as if it can, either by its creators or its users. If it is, then I fear it may drive us all mad.
Source: youtube · AI Harm Incident · 2025-11-09T16:1…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          unclear
Coded at         2026-04-26T23:09:12.988011
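
A coding result is essentially a small record: one comment id, a value for each of the four dimensions, and a timestamp. The Python sketch below is a minimal illustration assuming only the dimension names and values visible on this page; the class name CodingResult and the value sets are inferred from this single example, and the real codebook and storage model may differ.

from dataclasses import dataclass
from datetime import datetime

# Values observed in this example; the actual codebook may define more.
RESPONSIBILITY_VALUES = {"none", "user", "company", "ai_itself"}
REASONING_VALUES = {"consequentialist", "deontological", "virtue", "unclear"}
POLICY_VALUES = {"regulate", "none", "unclear"}
EMOTION_VALUES = {"sadness", "outrage", "approval", "mixed",
                  "indifference", "fear", "unclear"}

@dataclass
class CodingResult:
    """One coded comment across the four dimensions shown above."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def validate(self) -> None:
        # Reject values outside the observed codebook.
        if self.responsibility not in RESPONSIBILITY_VALUES:
            raise ValueError(f"unknown responsibility: {self.responsibility}")
        if self.reasoning not in REASONING_VALUES:
            raise ValueError(f"unknown reasoning: {self.reasoning}")
        if self.policy not in POLICY_VALUES:
            raise ValueError(f"unknown policy: {self.policy}")
        if self.emotion not in EMOTION_VALUES:
            raise ValueError(f"unknown emotion: {self.emotion}")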
Raw LLM Response
[ {"id":"ytc_Ugy59ASvxPQl3Zp1QcR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"sadness"}, {"id":"ytc_Ugz-8RkmZM8p8jbanAB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxIgUACE--0miJdOmh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgzxKq5zo2070x8LjdF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxCqS96Ma3D9RZVPth4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgysV_kE6WfXY2it1Zx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"sadness"}, {"id":"ytc_UgwGTnkJBReHO7157Nh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxLDV8fGbJ-PXWcpDh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugw3esNMLc-6zybiJ9B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"sadness"}, {"id":"ytc_UgyoQoLbrPUuDWVJNxl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"} ]