Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm just always going to be skeptical of what most people claim about AI, especially given the things that it produces. The fact that it has currently been taught close to all of human history and billions and billions of communications but still gets things wrong makes me think that it can't be functioning like a human. And there is a big difference between a human making a mistake and an AI, given that people don't have access to all of human knowledge. People with more education tend to be better about being correct, partly because we understand our own limitations. AI doesn't seem to understand its own limitations. It cannot control itself. It will straight up lie confidently, then when called out will admit its mistake, but not in a way where it learns not to lie in the first place. It's very good at regurgitating knowledge and pretending to be very human-like, but there doesn't seem to be any introspection beyond what it's learned from what humans have analyzed about it. I don't think that's really intelligent. I think it's also very dangerous. It's like an out-of-control, unfeeling machine. Which I guess it kind of is. Can you really reason with something like that?
youtube AI Moral Status 2025-11-02T20:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzKQVB3qsYmag_j4bF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyrUqpepH6e_XigUX14AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxXaih8IOXRktd9gx94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugw-ae8wrX7IWVotyS94AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxlySDojknulFdItIl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugy4G6azbOBM0Hsl2wR4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx0MfejGt0Vm7jOaSV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzBK-2oxVAmwDcHPpl4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugw51yDR9DPxpSsuHyV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxxrHWpfMSEbHX895p4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]
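A raw response like the one above can be parsed and sanity-checked before it is stored. Below is a minimal Python sketch; the allowed value sets are only inferred from the labels visible in this export (the actual codebook may define more categories), and the `parse_codings` helper name is illustrative, not part of any tool shown here.

```python
import json

# Allowed values per dimension, inferred from the labels visible in this
# export. This is an assumption, not the full codebook.
ALLOWED = {
    "responsibility": {"none", "government", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "approval", "outrage", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of objects) and
    validate each object against the inferred codebook. Returns a dict
    keyed by comment id."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
    return {row["id"]: row for row in rows}

# Example with one object from the response above:
sample = '[{"id":"ytc_UgzKQVB3qsYmag_j4bF4AaABAg","responsibility":"none",' \
         '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]'
codings = parse_codings(sample)
```

Validating before storage means a malformed or hallucinated label fails loudly at ingest time rather than silently corrupting the coded dataset.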