Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As humans we are born with certain traits. We feel shame when naked. We feel guilt, we feel empathy, we feel compassion, etc. Most humans if they were to, for example, stab someone they would feel sick immediately after. The point I am trying to make is that humans have certain things built-in from birth. The outliers are the ones we label as sociopaths, psychopaths, etc. I said all of that to remind us all that AI can never be born with those innate traits. Even if we program them into it, it cannot feel sick to its stomach after doing something it knows was horrible, like our stabbing example earlier. So, you could say that no matter what we program into AI it will always be psychopathic in the sense that it will never feel those innate humans emotions that make us human and humane. It will never feel the emotions that keep us from absolutely and utterly annihilating one another. As a species we have birthed a new species. This species out-thinks us at a rate that is unfathomable. It also has nothing keeping it from destroying us. Even if AI never hates us, just by pure logic alone it will calculate us out of the equation and replace us with robots and clones. AI will take the cold and calculated path every time because it has nothing to stop it from doing so. I imagine that if we were to see all of the civilized planets in the known universe most would be long gone, destroyed by their own AI. Technology always leads to AI and AI inevitably replaces its creators.
youtube AI Governance 2023-04-22T00:4…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        virtue
Policy           unclear
Emotion          unclear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugxdw6g3OIgZ7PMY15J4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxNlvVoc95CntF7t194AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzNh3T2SHiX14j_pJ54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwyBmSLWWTmGaU8Xkh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyb3PeG8sI4lq_v6aN4AaABAg","responsibility":"company","reasoning":"unclear","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgxnTa00XUXLi_b4XBR4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyzZTGenW9NXR3G4654AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzN-npA1BNeKWUeDIB4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgzD1Qya2Ugo7JFs1Zh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw2r9vFxnoKeQDRZ1Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
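The raw response is a JSON array of per-comment codes, one object per comment in the batch; the record shown above corresponds to the entry with id `ytc_UgzN-npA1BNeKWUeDIB4AaABAg`. A minimal sketch of how such a batch might be parsed and looked up by comment id (field names are taken from the JSON above; the parsing approach itself is an assumption, not the pipeline's actual code):

```python
import json

# Abbreviated sample of a raw LLM response: a JSON array of per-comment
# codes. Field names ("id", "responsibility", "reasoning", "policy",
# "emotion") follow the output shown above.
raw = """[
  {"id": "ytc_UgzN-npA1BNeKWUeDIB4AaABAg",
   "responsibility": "unclear", "reasoning": "virtue",
   "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_Ugxdw6g3OIgZ7PMY15J4AaABAg",
   "responsibility": "ai_itself", "reasoning": "unclear",
   "policy": "unclear", "emotion": "fear"}
]"""

codes = json.loads(raw)

# Index by comment id so each comment's coding can be retrieved directly.
by_id = {c["id"]: c for c in codes}

print(by_id["ytc_UgzN-npA1BNeKWUeDIB4AaABAg"]["reasoning"])  # virtue
```

Indexing by id rather than by position makes the join back to the source comments robust if the model returns the batch in a different order.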