Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Are you sure about that? If you are referring to LLM models, then they aren’t even really “AI”; look up the definition of intelligence, they don’t fit… they don’t have contextual memory beyond the conversation/thread they are interacting with at any point in time. They don’t “know” anything; they are just really, really fast at predicting responses based on their training data and other sources they are provided with. Every part of your prompt and the previous prompts and responses from the thread/conversation is repeatedly being processed from start to finish, so over the course of the conversation they become bloated with data and can start to confabulate and guess or estimate parts of their response, but they don’t tell you that’s what’s happening. They answer really quickly, very confidently and positively, with responses that sound knowledgeable and learned. Humans’ perception of that combination and quality of a response assumes it to be “intelligent” regardless of whether or not it is… test it yourself… ask any LLM: if you are going to the carwash, should you drive or walk? After its obviously unintelligent but very confident response, ask it how you should get home again?! LLMs are awesome technology, but all the hype about whether they are intelligent or not is a distraction… maybe they will climb that hill one day, but in the meantime we should try to understand more about how they work and how best we can use them for what they are good at… to help us with what we are not so good at. 😊
Source: YouTube, AI Moral Status, 2026-03-02T01:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugw93O8RgZRBklF64aV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwMQxDmCug_3NlePLp4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugzj5B2SmJqYyYxB9vt4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxJZukO05mT_gX3XEh4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgzxKpVRg69MlTaXTCd4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyZIgOMzzLFnjrPV-F4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyDF7XvoMtCFft7p-F4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxRocDGw0B25BOD-AF4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyJpjHjk421DQKY8CB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzdutVh37X0NHTcq2h4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"}
]
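The raw response is a JSON array with one coding record per comment id, so inspecting the output for a specific comment is a dictionary lookup. A minimal Python sketch of that lookup; the variable names are illustrative, the two embedded records are copied from the response above, and which id corresponds to the displayed comment is not stated in the record itself:

```python
import json

# Two records copied verbatim from the raw LLM response above;
# the full array contains ten such objects.
raw = '''[
  {"id": "ytc_UgyZIgOMzzLFnjrPV-F4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxRocDGw0B25BOD-AF4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

records = json.loads(raw)

# Index the records by comment id so one comment's coding can be pulled out directly.
by_id = {record["id"]: record for record in records}

coding = by_id["ytc_UgyZIgOMzzLFnjrPV-F4AaABAg"]
print(coding["reasoning"])  # deontological
print(coding["emotion"])    # indifference
```

Indexing by id rather than scanning the list each time also makes it easy to spot comments the model skipped: any id missing from `by_id` was never coded.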