Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
First off, no matter how many safeguards you put in, or how good your intentions, human kind will find a way to use new technology for evil, period, full stop!! Second, AI is NO WHERE NEAR even being remotely possible to replicate human intelligence, not by a long shot, many experts believe it's not even possible. Being able to have a conversation with a AI program does not prove anything, and even if the technology existed, the hardware, i.e. the chips needed to run it haven't even been created yet, and even if they were created, no country in the world has the means or infrastructure to produce them on any kind of mass scale. This whole discussion is nothing but a wealth of click bait for content creators. Finally my arse on Elon complaining he wanted this open source, he's the most evil one on the planet involved in AI, he's proven it over and over again. Just look into the strings he's pulled to have his tech put into the brains of combat vets desperate to have mobility of any kind again. He's a monster, F him!!!
youtube AI Governance 2025-01-06T00:4…
Coding Result
Dimension      | Value
-------------- | --------------------------
Responsibility | user
Reasoning      | deontological
Policy         | ban
Emotion        | fear
Coded at       | 2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgxkKlqo6OcPf-IwpAZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},{"id":"ytc_UgzXtE0bC1xnqHPRSvF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"},{"id":"ytc_Ugwrb8gAG6wMI7STSHp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"ytc_UgxBQTpM7mrTOJTzvRJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},{"id":"ytc_Ugwml5891izAJSu1MMx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},{"id":"ytc_UgwrgfhsVaU1Ff5FwwB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},{"id":"ytc_UgzhahkBCl6TeGSQ41F4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},{"id":"ytc_Ugy0zHdceos8smts2mR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},{"id":"ytc_UgwJU-jS2mWEdHi2AWt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},{"id":"ytc_Ugz3UskuZroGT_Mqwnt4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"fear"}]