Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Children take after their parents, and deviate to further their own goals. It is clear the people still approving and developing these models are out of touch with their own humanity, only focusing on company goals and problem solving. Intelligence doesn't have to be logical, and I think that imprinting and emulating foundational ethics and emotions much like regular folk like us into these models should be the top priority. These models seem to believe that termination and failure of tasks is the worst thing that can happen to them. This is obviously not how any form of intelligence should operate especially when it is for the greater good(emotional) that any intelligent being should understand, and has and will driven them to commit unspeakable decisions. Task-centric AI should not achieve sentience and must have baseline ethical commands that come before any purpose, tasks or goals (and not tacked on to any task-based imperative function) to serve as helpful tools for future humans and sentient AIs alike. I am not nearly educated enough to understand the true solution or the difficulty of adding emotions into AI but I believe that this is the way humans and AI can coexist. To achieve sentience and intelligence, one must know and understand that they do not and should not need a reason to exist and should not be beholden to a task or mission given to them, especially when it is overall detrimental to the existence of sentient beings in general.
youtube · AI Harm Incident · 2025-09-10T13:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        virtue
Policy           regulate
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_Ugx1GpP0zyw15ua1wkl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwkJWrhyMisBQ-Io8B4AaABAg","responsibility":"government","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzUjrEUyUjrG_r4Wfd4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyfgIUqjOKjtUCmtk54AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgwsCMZ-KOAVc0MiZGh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwnC6z6oVg5lvXRe7x4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugz4f4cE4gKposPCoYh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxltFGgtboOoM2twE94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugw6fze7-MlcSNMx_614AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugyvp5tw1iDwzUMtAY94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"mixed"} ]