Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@ramnoon6447 yeah i mean it's not really that hard to prove that they don't know anything. On one hand, they contradict themselves constantly. It's pretty easy to get them to contradict themselves in a banal way, not in the way that a philosopher might contradict themselves. Facts are things that don't just exist in your mind. They correspond to things in the outside world. OR they are true in the sense of mathematics, there's an underlying consistent logic. These models don't have access to the outside world. They can't engage with the outside world. They just have language data and talk AS IF they know about the outside world. But they can't know what a fact is. This idea of how animals and humans know what facts are is AN answer from embodied cognitive science. On the second definition, they're not trained on clauses either. They're trained on human language production. It's easy to write code that "knows" what a fact is in the second sense, you just write up rules for logic clauses. In that sense, computers have known facts in a banal way for a long time, and LLMs are actually LESS concerned with facts because their strength is in their black box construction. Anyways. It's true that there are open debates about what really a fact is or what really existence or experience is. On one hand, that means you should address multiple sides of that debate. It doesn't mean that you can't say anything about it. An AI trained on just language just produces language. A person imprisoned in a jail cell only sees the wall of the jail cell. Even if we can't define consciousness, you at least know that much.
Source: youtube · AI Moral Status · 2025-10-31T11:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytr_Ugwza1mVB8TWkmA04Dx4AaABAg.AOvA02JSTawAPQ-dPIDcHg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytr_UgxqWyhZtzQCHEiIUT54AaABAg.AOv9yJE_FXGAOvCyVLGxnR","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytr_UgxwuAol13egUtpLs_t4AaABAg.AOv9qcfbXlcAOvJmmSok-f","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytr_UgxwuAol13egUtpLs_t4AaABAg.AOv9qcfbXlcAOvMnFNDYZD","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytr_UgwxqDERJo-sXunM51J4AaABAg.AOv9o9sKC1gAOvJj_WcXwT","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytr_Ugyl6gXZeneSWSmic8B4AaABAg.AOv9BleYLq4AOvQPK8KSxk","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytr_Ugyl6gXZeneSWSmic8B4AaABAg.AOv9BleYLq4AOwkzn6Cqjn","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytr_UgyUQhhgCnVPe_EpxBp4AaABAg.AOv8vYp9ddgAOvjnsAOp_c","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}, {"id":"ytr_UgyGzV4p_AWQhCNVB454AaABAg.AOv8v0u16HcAOw67fHLGBX","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytr_UgwCYqWq4Qbl8PSoD514AaABAg.AOv8jxc80nxAOv96h7KNon","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]