Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I work in AI, and I can say we know how it works. To understand “why” it works is the real question - why does such a huge model generalise well on such a small amount of data? Even in the case of LLM it’s true. There’s honestly nothing great about today’s so called AI, it’s just statistical parroting at scale. We tend to ascribe intelligence to anything that behaves and acts like us, even mimicry.
Source: youtube · Topic: AI Governance · Posted: 2025-08-26T15:2… · ♥ 15
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugwz6ReKY9mEFJBbE1h4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwuSKQu3yNQk7C-cC54AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "regulate", "emotion": "unclear"},
  {"id": "ytc_UgwJRJ2xMK-WREdEEJd4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxoaVIak9wSgWMH1hN4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzEV7NYE8SIsFGDhu94AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
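The raw response is a JSON array of per-comment coding records, each keyed by a comment id and carrying the four coding dimensions. A minimal sketch for loading and validating such a response might look like this (the field names come from the response above; the helper name `parse_codes` is hypothetical):

```python
import json

# Excerpt of a raw LLM response: a JSON array of coding records
# (ids and values taken from the response shown above).
raw = """[
 {"id": "ytc_Ugwz6ReKY9mEFJBbE1h4AaABAg", "responsibility": "none",
  "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
 {"id": "ytc_UgzEV7NYE8SIsFGDhu94AaABAg", "responsibility": "government",
  "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

# The four coding dimensions every record is expected to carry.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(text):
    """Parse the model output and reject records missing any dimension."""
    records = json.loads(text)
    for rec in records:
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{rec.get('id')}: missing {missing}")
    # Index records by comment id for easy lookup.
    return {rec["id"]: {d: rec[d] for d in DIMENSIONS} for rec in records}

codes = parse_codes(raw)
print(codes["ytc_Ugwz6ReKY9mEFJBbE1h4AaABAg"]["emotion"])  # indifference
```

Validating up front this way surfaces malformed or truncated model output before it is written into the coding table.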