Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI doesn’t read everything that humans ever wrote. AI, in particular large language models, can target specific publicly accessible written source material, but generally it gets the sycophantic answers most idiots ask it from the losers in society - i.e. Reddit group think (Reddit is the largest collection of written replies by people who have accomplished nothing in life but know a lot about how things are supposed to work). The cashier who works at my grocery store is clever and posts a lot on Reddit. However, I would never take advice from a clever person who works a job as a cashier for minimum wage, rents a basement, and drives a piece of shit. Equally, I would never take financial advice from the guy in the small office at my local bank who makes 100k a year when I make more than that in interest alone. AI is the biggest nothing burger of all time. If you believe that a bunch of code that summarizes crap you can read in a faster way - that you still have to go through - or something which requires an input from you to know why it does something, is suddenly going to “come alive” and somehow build infrastructure physically, you are bent. People are motivated to do things because they have a strong memory of the past and worry about the future, so they have a reason to be motivated directionally. Something that doesn’t generate history can’t worry about the future in the long term, so it lives in the present. Have you ever been afraid of someone with dementia taking over the world? Not at all. AI is like an old woman with dementia and nothing more.
youtube · AI Moral Status · 2026-04-02T14:1…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:26:44.938723
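Each row of the table is one field of a single coded record per comment. A minimal sketch of that record as a data structure, assuming Python; the field names come from the Dimension column above, and the values are left as plain strings ("unclear", "none", "consequentialist", ...) because the full label vocabulary is not spelled out on this page:

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment, mirroring the Dimension/Value table above.

    Field names follow the Dimension column; values are kept as free
    strings rather than enums since only a sample of labels is visible.
    """
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # timestamp, e.g. "2026-04-27T06:26:44.938723"
```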
Raw LLM Response
[{"id":"ytc_UgxmUQhIVQ6QDQF9RyJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugyyyq5K6pVMo1lSGBB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugx0s1-aH1WuZaRxy4t4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugw9hdduKMuVvElNQIl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyrasMZI4vqvQLJP6p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugwbt8WrNzr3gPfpNpN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwZR4q2uSVQiNnALEN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugyz6bL0ScO1jJXnws94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgwUqn7kU09QxRmHTSN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugy3IrHlIphy-rAtH714AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"mixed"})