Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Current AI is essentially a probabilistic echo chamber, word salad shaped by reinforcement learning. People project intelligence onto it because it reflects fragments of their own input with statistically probable coherence. But AI doesn’t "understand" anything. There’s no knowing behind the knowing. It selects responses based on patterns of past reward signals, not cognition. We could train monkeys to mimic behaviors based on reinforcement, and they’d arguably outperform us in some tasks. But mimicry isn’t intelligence. This fascination with “souls in the machine” is romanticism, not reality. True AI, the kind that reasons from first principles and generates understanding from the ground up, doesn’t exist yet. That kind of cognition wouldn't need human scaffolding; it would synthesize its own truths. And maybe it’s coming. Biological wetware computing is starting to blur the boundary. But silicon? Bits? No matter how complex the architecture, they can’t reason. Not truly. Qubits might open a door… but we haven’t really stepped through it yet.
youtube AI Governance 2025-07-07T18:5… ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgyxtFViUJPI52aKYzN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxAr-_N4H4YGGT41mB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyM5AhSmAAoeF8Rbix4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyJbTvKL4_V04-ocQl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxfjAUXfjf4sOBdPXN4AaABAg","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugy232ZY_msh8TC27Ox4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxUnRUg1WHUO6PJvZN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzUwFD6yko2e37b4al4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx-hMF0TER8eXZtwAp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzZ1SlhHqItNXs2dgN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
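The raw response above is a JSON array of per-comment codings, keyed by comment id. A minimal sketch of how such output can be parsed and a single comment's coding looked up, assuming the model returns clean JSON with exactly these field names (the `lookup` helper is illustrative, not part of any tool shown here):

```python
import json

# One record copied verbatim from the raw LLM response above;
# in practice `raw` would hold the full model output string.
raw = '''[
  {"id": "ytc_Ugx-hMF0TER8eXZtwAp4AaABAg",
   "responsibility": "none",
   "reasoning": "deontological",
   "policy": "unclear",
   "emotion": "mixed"}
]'''

# Index the coded records by comment id for O(1) lookup.
codings = {rec["id"]: rec for rec in json.loads(raw)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment id."""
    return codings[comment_id]

result = lookup("ytc_Ugx-hMF0TER8eXZtwAp4AaABAg")
print(result["reasoning"])  # deontological
```

The fields of the returned record correspond to the dimensions shown in the Coding Result table for this comment.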