Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
By dismissing the most salient issues as “philosophers can debate” you are missing the obvious facts that these machines do not think or know or care. You say that makes it hard to talk about but you are trading accuracy for conversational ease. When you say that AI thinks/knows/cares you are analogizing to human behavior but the answers all lie in the fact that these machines are not human. Whatever they do that looks like thinking / knowing / caring is fundamentally different from what we do. The sooner we develop new words for the new technology the sooner you will be able to actually discuss AI hallucinations. The AI cannot prioritize using actual case law and also prioritize mimicking a brief because these things are mutually exclusive. So it prioritizes mimicry. Btw, I am not a philosopher. I am a lawyer. When a bunch of smart tech guys start hand waving over the word “intelligence” and insisting that it is pure semantics, that is the reddest flag I have ever seen that they are avoiding the most important aspect of this conversation. Define the “intelligence” that we are seeing. Name it and explain it by observation and process rather than by analogy. Then use that information to address the concerns.
Source: youtube · AI Moral Status · 2025-12-27T18:5… · ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        deontological
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
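The coding result follows a fixed four-dimension schema. The sketch below is a minimal Python illustration of how such a record could be represented and validated; the value sets are inferred only from labels visible on this page and in the raw response below, so the actual codebook may allow additional categories.

```python
from dataclasses import dataclass

# Hypothetical value sets, inferred from the coded results shown on this page;
# the real codebook may define more labels.
RESPONSIBILITY = {"none", "company", "ai_itself"}
REASONING = {"deontological", "consequentialist", "mixed", "unclear"}
POLICY = {"none", "regulate", "liability", "industry_self"}
EMOTION = {"indifference", "outrage", "fear", "approval", "mixed"}

@dataclass
class CodedComment:
    """One coded comment, mirroring the dimensions in the table above."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def is_valid(self) -> bool:
        """Return True if every dimension uses a known label."""
        return (
            self.responsibility in RESPONSIBILITY
            and self.reasoning in REASONING
            and self.policy in POLICY
            and self.emotion in EMOTION
        )
```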
Raw LLM Response
[ {"id":"ytc_UgzYIsJl_jnPGWoZSwV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwmuF7DhGZ-MlQgVoJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugzb8Iii-MHFMNOF_k54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyTfkJ6AbyQL1MBnfp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwADWL1ZWtxPeWeiWN4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzWu3p4jk7bLMMQn014AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwX6eE5gJt-kWRmvPZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxZAPnXevNUsAVYrXN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgxwSWqFbT35Ja__0-x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"}, {"id":"ytc_Ugy90vbtttytz7hsZFt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"} ]