Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Geoffrey Hinton, the godfather of AI, constantly warns people of the risks of AI. Right now companies do what they want with no concerns about security. AI can be a good tool if used right, but there is currently no oversight and the risks to humanity are enormous. We are very very close to AGI and Super Intelligence and depending on how all this is done, it is uncertain what could happen, if this could be good or terrible for all of us. Obviously this is just an example of a NARROW AI Chatbot that caused another human being to take his own life, and that is terrible. I really urge you all to see more of Geoffrey Hinton (among other experts who warn people about the dangers) to get a better understanding. If Narrow AI can already do this, imagine what else it could do when it reaches the levels of an AGI or Super Intelligence. This is not a game and many technocrats are careless about all this. There needs to be guardrails and oversight.
youtube AI Harm Incident 2025-12-10T14:1…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugyq6ZMXYak_2jn2TpJ4AaABAg", "responsibility": "user",       "reasoning": "virtue",           "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgyLHW1jVOWsQb-5wV14AaABAg", "responsibility": "user",       "reasoning": "virtue",           "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_Ugy8ENEGOxt9tueCalN4AaABAg", "responsibility": "developer",  "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugz-NgHwVW7zZXqJ3XZ4AaABAg", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_UgwqusR2DJ5rkkoYLEh4AaABAg", "responsibility": "none",       "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "ytc_UgwT4qkXEhOK1xa67ph4AaABAg", "responsibility": "company",    "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgzqN-6RJQwnh5HEpmR4AaABAg", "responsibility": "ai_itself",  "reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_UgwnXmyml2VVnF74pfx4AaABAg", "responsibility": "ai_itself",  "reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_UgwfGRGUaBuphFAZC3R4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgxvVOr6olGpjPdKPvp4AaABAg", "responsibility": "user",       "reasoning": "virtue",           "policy": "none",      "emotion": "resignation"}
]
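To inspect the raw response programmatically rather than by eye, the batch output can be parsed and indexed by comment id. This is a minimal sketch assuming the raw LLM response is a valid JSON array of per-comment codings as shown above; the single-entry `raw_response` string here is an excerpt of that array, and the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken directly from it.

```python
import json

# Excerpt of the raw LLM response above: a JSON array of per-comment codings.
raw_response = '''
[
  {"id": "ytc_UgwT4qkXEhOK1xa67ph4AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]
'''

# Parse the array and index the codings by comment id for quick lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding for one comment and read off two dimensions.
row = codings["ytc_UgwT4qkXEhOK1xa67ph4AaABAg"]
print(row["responsibility"], row["policy"])  # company regulate
```

The lookup for this id matches the Coding Result table above (responsibility: company, policy: regulate), which is one quick consistency check between the parsed raw response and the coded dimensions.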