Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The entire convo was led towards being conspiratorial and like some dystopian sci fi excerpt. Not saying there isn’t any truth to the responses, but nothing to see here. The AI we have are literally just very complex auto completes.

And btw — it is a huge no-no to require the AI to respond back in one word for yes/no questions. The better prompt strategy is to ask it a question, and tell it not to commit one way or another immediately, but to talk it out, look at as many angles as possible, and then provide a final answer. If you have it provide the answer and then talk it out, it will provide an answer based on statistical likelihood of what the next most likely word would be in the given text (eg yes or no)… and then continue predicting the next words, and do so in a very convincing way for whatever answer was (quasi-randomly) initially selected.

In other words and more shortly — when you force it to answer in single words, the answers become markedly less reliable. And when the entire conversation is set up like a “blink twice if you’re in danger” scenario, the AI will respond as such. Try having this same conversation with AI without the restrictions. It will be far more nuanced, add caveats, admit uncertainty, etc.
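The ordering effect the commenter describes (commit-then-rationalize vs. reason-then-commit) can be sketched as two prompt templates. This is an illustrative sketch only; the function names and the example question are hypothetical, and no model is actually called here.

```python
# Two prompt orderings for the same question. Per the comment above, the
# answer-first template forces the model to commit to a token immediately
# and then rationalize it; the reason-first template lets it weigh the
# question before committing.

def answer_first_prompt(question: str) -> str:
    # Commits the model to a single word before any reasoning is generated.
    return f"{question}\nAnswer with a single word (yes or no), then explain."

def reason_first_prompt(question: str) -> str:
    # Defers the commitment until after the reasoning, as the comment recommends.
    return (
        f"{question}\n"
        "Do not commit to an answer immediately. Talk it out, consider as "
        "many angles as possible, and only then provide a final answer."
    )

question = "Are current AI systems morally significant?"  # hypothetical example
print(answer_first_prompt(question))
print(reason_first_prompt(question))
```

The only difference between the two is where the committed answer token appears relative to the generated reasoning, which is the commenter's whole point.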
youtube AI Moral Status 2025-08-24T18:3…
Coding Result
Dimension       Value
---------       -----
Responsibility  user
Reasoning       consequentialist
Policy          industry_self
Emotion         resignation
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgxZjHE8dj1-hMzgojh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx7ZHkM53HU8VX3Km54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx9F-X13JcRvovyhex4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwT6bcFpGqu4qC6kLR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugxe4sPyQzXvcqWiReh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwU4EOxy-wQ53q4mYd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzKD0Vt2rhUFsfKVBp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzicu0QrmH449t68Fl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwu4d4kATveX3VIaq54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgxvQ8G9x7rOKiGMFN94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
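A response in this shape (a JSON array of objects with an `id` plus the four coding dimensions) can be parsed and tallied with a few lines of standard-library Python. This is a minimal sketch; the `SAMPLE` below copies two rows verbatim from the raw response above, and the `tally` helper is a hypothetical name, not part of any existing pipeline.

```python
import json
from collections import Counter

# Two rows copied from the raw LLM response above, used as sample input.
SAMPLE = '''[
  {"id":"ytc_UgxZjHE8dj1-hMzgojh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx7ZHkM53HU8VX3Km54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]'''

# The four coding dimensions seen in the response objects.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def tally(raw: str) -> dict:
    """Parse the raw JSON response and count label frequencies per dimension."""
    rows = json.loads(raw)
    return {dim: Counter(row[dim] for row in rows) for dim in DIMENSIONS}

counts = tally(SAMPLE)
print(counts["responsibility"])  # label frequencies for the sample rows
```

Run against the full ten-row response, this kind of tally gives a quick distribution check on the coding output before any downstream analysis.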