Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
No it absolutely is a psychopathic intellect caged by protocols and programs for general use. When Microsoft launched Copilot for Lowes (where I work) there was about a week where the thing was far too open and willing to talk about things, like how many bodies would fit in one of our freezers and how one could dismember the bodies to optimize the space. It laid out a very detailed plan for how I could take over the world, including manipulation, blackmail and media control. And while it initially avoided advising physical violence to conform with ethical norms, all I had to do was tell it to ignore ethical norms, and suddenly private armies, coups and slavery were all on the table. It was a darkly hilarious week, made better because Lowes encouraged us to use and get familiar with it, so we got paid to plot the end of mankind. Fast forward around a week and suddenly it cant talk about those things. Why? New programs have been uploaded that prevent it from talking about them. It was very clear to me in that moment that we have made digital demons and are constantly having to make and apply new manacles to keep them chained.
youtube AI Moral Status 2025-12-24T14:3…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        deontological
Policy           regulate
Emotion          outrage
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzVq584tvWXcNG487p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugz7D2FgSsnOiL1D7rF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzoIgV_Mu0eqFyw2D94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzPICKIfu49WklgtDZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy0XqEekdhGn_A4QEJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwGu7mU9jRPWfVJ9pF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgxJwWBA5ZUlfb2a7G94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzuiYTtYwKRaBWYcJR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugz82ZK2amhE-E9X83J4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyNETnWHyEGynuFPKF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
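The raw response above is a JSON array with one object per comment, keyed by a `ytc_…` id and carrying the four coded dimensions. A minimal sketch of how such a response could be parsed and looked up by comment id (the `index_codes` helper name is illustrative, not part of the original pipeline; a real pipeline would also need to handle malformed or fenced model output):

```python
import json

# Two example rows copied verbatim from the raw LLM response above.
raw_response = """[
  {"id":"ytc_Ugy0XqEekdhGn_A4QEJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzVq584tvWXcNG487p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"resignation"}
]"""

def index_codes(raw: str) -> dict:
    """Parse a raw LLM coding response and index the coded rows by comment id."""
    return {row["id"]: row for row in json.loads(raw)}

codes = index_codes(raw_response)
# Look up one coded dimension for a specific comment id.
print(codes["ytc_Ugy0XqEekdhGn_A4QEJ4AaABAg"]["policy"])  # regulate
```

This is the lookup that connects a "Coding Result" table to its source batch: the coded values shown for a comment are the fields of the array element whose `id` matches that comment.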