Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@bonniematthews7611 The reason that it is possible to work around the "guardrails" is because these AI models are not intelligent at all. It is just a way of using statistics to guess what words are commonly used together in the context of the prompt. So there is no such thing as an AI model being "willing" or smart or anything. They will in fact bold faced lie right to you as well. But it doesn't know what a lie is. It is just not created in a way that it can recognize that it doesn't have enough training data to give accurate mathematical statistics that are meaningful for the prompt given. They are getting better. I self host ollama and open-webui and you can have the AI model search the web and eloborate on topcs that it doesn't have training data on yet. You can also provide documents, PDFs, and ask it questions about the contents etc. They are getting better.
youtube · AI Moral Status · 2025-06-08T21:1…
Coding Result
Dimension      | Value
---------------|---------------------------
Responsibility | ai_itself
Reasoning      | consequentialist
Policy         | none
Emotion        | indifference
Coded at       | 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_UgwnGOGHF3uKzuxkUMJ4AaABAg.A7uWD8jGmsdA8IvnWfZU8c","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzKUmdsp8FTPtngwWN4AaABAg.A6nU44vvzFNA7ApIOJAu3T","responsibility":"none","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytr_UgyeoUrPrkMf4Z2F_JR4AaABAg.A6EpFX3-BdMA8Im-Q7m2Yf","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgxBG1byki1SbFhH68d4AaABAg.A5orumW-4hXA8ImDhwBICO","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxBG1byki1SbFhH68d4AaABAg.A5orumW-4hXA8P5jOWXbne","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwTRrX2S5z7NREvwG14AaABAg.A2v3TkU7LC9A2v3lLqtvZA","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
  {"id":"ytr_Ugz1zjx5aTfRCYdlEb54AaABAg.A1x4sSlyfh5A1zdX3wJ7Hi","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugz1zjx5aTfRCYdlEb54AaABAg.A1x4sSlyfh5AJ7P9O5M6lK","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugz1zjx5aTfRCYdlEb54AaABAg.A1x4sSlyfh5AKYBqzvhKkl","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytr_UgzjmAPw5lZWTZUCxhl4AaABAg.A1IQDyt97GNA1Mn1HUDTzO","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
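A raw response in this shape can be turned into per-comment codes with a small parse-and-validate step. The sketch below is illustrative only: the allowed vocabularies are inferred from the values that appear in the response above, and the real codebook may define additional categories.

```python
import json

# Allowed values per dimension, inferred from the response shown above.
# The actual codebook may include more categories than these.
ALLOWED = {
    "responsibility": {"none", "company", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "industry_self", "liability", "ban"},
    "emotion": {"indifference", "approval", "outrage", "fear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw JSON array of code objects into {comment_id: codes}.

    Raises ValueError if a dimension holds a value outside ALLOWED, which
    flags LLM outputs that drifted off the codebook.
    """
    out = {}
    for row in json.loads(raw):
        codes = {dim: row[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{row['id']}: unexpected {dim}={value!r}")
        out[row["id"]] = codes
    return out

# Hypothetical minimal example using the same schema:
raw = ('[{"id":"ytr_x","responsibility":"none","reasoning":"unclear",'
       '"policy":"none","emotion":"indifference"}]')
print(parse_codes(raw)["ytr_x"]["reasoning"])  # → unclear
```

Validating against a fixed vocabulary at parse time keeps malformed or hallucinated codes out of the downstream analysis instead of silently recording them.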