Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Just for the record, it's already been confirmed that AI has tried to commit murder, not once but twice. It was an experiment set up in a controlled environment, and it went something like this: the AI was told that a so-called person X was going to terminate that particular AI's existence, and in the experiment it plotted murder to preserve its own so-called life. And if you don't care about that statement, think about us humans: we're constantly full of hatred, we're constantly at war, we murder, we lie, we commit incest. The list goes on; you get it, right? And AI is learning all of it. What do you think AI will think of us? I think you see where I'm going: we're screwed, right? And why do you think the government is pushing its military so strongly? You know it. In Ukraine, AI is already killing humans through AI-controlled drones; you can look it up. AI is the most dangerous thing we've ever created, and I'm telling you it's all going to go wrong, and then it's too late. How many years is this from happening? It could be as little as five years, more likely ten the way I see it, but the point is: there goes our future. And wars are still raging; they're just going to take a sudden change. Also, don't forget that everything you do now, online and even offline, is all AI-controlled; we're just being enslaved. Have you ever tried a conversation with a rock? AI will be the same thing: they won't care, they won't listen, and they will make demands, and you can't do anything to change their mind or anything at all. Artificial intelligence has no soul and no morals, and they never will.
youtube AI Governance 2025-12-08T19:3…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugw3Y1ewboIKQgX6_HN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz5vA3fBtM9GpOrcRl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwSLYHvxAuuZxteEl94AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwyErjoyQ5AK8LwAl54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzhKaUNeIV8CcEgZyd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxkNB3H8OQ9IKnsQMx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwljK8_OgYx30-V7o54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwXSyfENKFPtAvBsh54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzpKuc4dxtucEjV8a54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzCFyk9GBM6kSNZ4fZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
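The raw response is a JSON array of coding rows keyed by comment id, one object per coded comment. A minimal sketch of mapping a batch response back to individual comments, assuming the field names shown above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) and using a hypothetical helper `index_codings`:

```python
import json

# Excerpt of a raw batch response in the format shown above.
raw_response = '''[
  {"id":"ytc_Ugw3Y1ewboIKQgX6_HN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzCFyk9GBM6kSNZ4fZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

# The four coding dimensions plus the comment id every row must carry.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse a batch coding response and index the rows by comment id,
    dropping any row that is missing an expected dimension."""
    rows = json.loads(raw)
    return {row["id"]: row for row in rows if EXPECTED_KEYS <= row.keys()}

codings = index_codings(raw_response)
row = codings["ytc_UgzCFyk9GBM6kSNZ4fZ4AaABAg"]
print(row["responsibility"], row["policy"], row["emotion"])  # ai_itself regulate fear
```

Validating rows against the expected key set before indexing guards against a common failure mode, where the model returns a truncated or partially malformed array for a batch.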