Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A KlikBait Title but an interesting presentation. Humans ... ALL Humans (and most animals) have a set of "morals" that they follow as "rules for life". They might not be "good" they might even be "horrible", they might even be incomprehensible but ... they are there and act, to some extent, as a limiter. Further, there are limits to the extent that one Human ... or even a group of Humans ... can affect other Humans. Even the most twisted Human recognizes the need to survive as a species ... to, at some level, protect Human life. The problem that I see with AI is that it truly has no "morals" ... no behavioral baseline ... no real version of enforceable "ethics". As it has no "progeny" or, can assemble from parts an "additional" or "new" version of itself ... it has neither a past or a future ... there is only itself regardless of which version of "self" it is. It certainly has/will have no need of a "Vision of the Future" that is in any way related to any Human's version of a "Vision of the Future". It certainly has no need of meeting 99% of ANY "Goal" that Humans MUST meet ...
Source: youtube · AI Harm Incident · Posted: 2025-09-10T14:3…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        mixed
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
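Each coding result reduces a comment to four categorical dimensions plus a timestamp. Below is a minimal sketch of that record in Python, built only from the category values observed on this page; the actual codebook may define additional labels, and the class and set names are illustrative, not the tool's real API.

```python
from dataclasses import dataclass
from datetime import datetime

# Categorical values observed in this section; the real codebook may allow more.
RESPONSIBILITY = {"developer", "company", "ai_itself", "distributed", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed"}
POLICY = {"ban", "regulate", "liability", "none", "unclear"}
EMOTION = {"fear", "outrage", "resignation", "indifference"}

@dataclass
class CodingResult:
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def __post_init__(self):
        # Reject any value outside the observed vocabulary.
        for value, allowed in ((self.responsibility, RESPONSIBILITY),
                               (self.reasoning, REASONING),
                               (self.policy, POLICY),
                               (self.emotion, EMOTION)):
            if value not in allowed:
                raise ValueError(f"unexpected code: {value!r}")

# The result shown in the table above.
result = CodingResult("unclear", "mixed", "unclear", "indifference",
                      datetime.fromisoformat("2026-04-27T06:26:44.938723"))
```

Validating against a closed vocabulary catches malformed model output before it reaches the table above.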
Raw LLM Response
[ {"id":"ytc_UgynaA2QyD2_ge3C8Sd4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxCxpPrSkifhaznV8x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyR_I-Whl8D9NwaJqN4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxrbDjHGXk73sEQygV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgzIFEjWEgLqQj0YdP54AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxqHIZPnexj3cibMoJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgziWAttFkFUjIreOP94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugxsh5McjKZuec42j9V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzPCnGnu3uNbCoRQuh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgzI7_CMWlfbU-zap_F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]