Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You're making a big mistake discounting and even pooh-poohing the Philosophy take on all of this. Disregarding it results in a near certainty of having a largely meaningless conversation about A.I. which *sounds* like it really dug into all the important questions, but ultimately only *sounded* that way. Almost like the exact problem we're all having with LLMs, you know that sense of "this machine just did a great job of sounding like a human discussing this subject... except so many of its conclusions or reasons were demonstrably false, and when I tell it about that, it apologizes and then does the exact same thing again." You know that issue? Well, that's what your whole intriguing discussion with Nate is like, if you're both uninterested in grounding the whole thing in the philosophy of language and meaning. You both said a lot of interesting-flavored stuff, but so much of the discussion rested firmly on mistaken assumptions about the machine processes having intention, or even the remote possibility of anything like intention. Or cognition, reasoning, etc. The fact that they can fool some of us to seem like they display clear evidence of possessing those things says almost nothing about them, and everything about humans' strong tendency to misinterpret and mis-attribute what's really going on.
youtube AI Moral Status 2025-10-31T02:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwcMdHiPdTgFCEJ9yV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzCo8EE_W1v1yLP2aZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxQ6X2p5ivXXolDgdx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxDe4qf8L9tMvdtq7F4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw9eKoKBqUxZEUdiM14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzWDTtG2hvhTEvjf8p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwcsfS4IKgHj9eDrJh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy6hKPvTYO5uTLhEUp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyWjtc_wDUeskBZwYZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyJGki2dmO2trJ0wWZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
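The raw response is a JSON array of per-comment coding records, one object per comment id, with the four dimensions shown in the table above. A minimal sketch of how such a response could be parsed and a single comment's codes looked up (the `codes_for` helper is hypothetical, not part of any pipeline described here; the two records are taken verbatim from the response above):

```python
import json

# Two records copied from the raw LLM response above, for illustration.
raw = '''[
  {"id": "ytc_UgxQ6X2p5ivXXolDgdx4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy6hKPvTYO5uTLhEUp4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

def codes_for(raw_json, comment_id):
    """Return the coding record for one comment id, or None if absent."""
    records = json.loads(raw_json)
    return next((r for r in records if r["id"] == comment_id), None)

rec = codes_for(raw, "ytc_UgxQ6X2p5ivXXolDgdx4AaABAg")
print(rec["emotion"])  # -> outrage
```

Matching the record back to the displayed comment by id is what lets the "Coding Result" table and the raw response be cross-checked for a given comment.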