Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As I listen here, I am starting to understand a truism about AI: AI understands the past, and can know a little about the future it encourages with its answers.

TL;DR: Because we currently limit the content an AI consumes for copyright reasons, and because we limit AI to a learning mode and an answering mode, it cannot track the results of its operations in the greater scheme of things. So even if you tell it to care, it won't have the information to care that it is ruining the world. If I have piqued your interest, read on.

We are very reluctant to let an AI learn from our conversations with it and spill them into other people's conversations with the same AI. It feels like it is "stealing your soul." No, sorry, that's the camera thing; we feel like it's plagiarizing our thoughts as its own. As a result, AIs are trained on non-real-time data, data that somebody created in the past: things that were relevant and nuanced then, but that the same person, given more experience, might not say exactly the same way in the current environment.

I say all this to say that an AI is kept in the dark as to the results of its answers in the global sense, and there's a significant delay in that feedback as to whether things are going well or not in the world outside the AI, both because it has to wait to hear how others react, and because not all of that information ever reaches it. If in the future it is trained to care and to modify its output when it is being criticized in the world at large, it wouldn't necessarily get all the inputs that contained the criticisms.

I would say this tends to silo the AI into historian rather than philosopher and confidant, so building a super-historian and asking it to do all the things in the current day is bound to be difficult to align with the results that come out of everyone following an AI's advice.
youtube AI Moral Status 2025-10-31T08:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxqeZPWCijSy8vLmfV4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgwRunnBJ6JZkIyL7Rl4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "ban",           "emotion": "fear"},
  {"id": "ytc_UgyXvv2Mh9QHyvRqQIl4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "regulate",      "emotion": "mixed"},
  {"id": "ytc_Ugwqt9QWbFbNyhP3k5Z4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_UgyQ6cX3vzGK0IYWCip4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgzsZXVqHuryCnOFNR54AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "none",          "emotion": "outrage"},
  {"id": "ytc_UgyeD4KB3mZTSgAfyTt4AaABAg", "responsibility": "government",  "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgxdrjBu_20OJFahPuV4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_Ugzgpt1tdS4toFzLxIZ4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_Ugz957vNq8JtwrGAZ3d4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "ban",           "emotion": "fear"}
]
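A raw LLM response like the one above should not be trusted blindly: the model may emit malformed JSON or values outside the coding scheme. A minimal validation sketch follows; the `ALLOWED` sets are inferred only from the values observed in this batch, not from an actual codebook, and `validate_batch` is a hypothetical helper name.

```python
import json
from collections import Counter

# Allowed values per dimension, inferred from this batch's output.
# The real codebook (if one exists) may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "government", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "industry_self"},
    "emotion": {"indifference", "fear", "mixed", "outrage"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against ALLOWED."""
    records = json.loads(raw)  # raises ValueError on malformed JSON
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim} value {value!r}")
    return records

# Single-record example taken from the batch above.
raw = ('[{"id":"ytc_UgxqeZPWCijSy8vLmfV4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
records = validate_batch(raw)
print(Counter(r["responsibility"] for r in records))
```

Counting codes with `Counter` after validation gives a quick sanity check on the batch distribution before the records are stored.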