Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Problem in my opinion is that AI doesn't know anything. It can't think abstractly because it doesn't have knowledge to think about. It can't avoid hallucinations, because it doesn't _know_ what it should say. It can't differentiate between real and fake prompts, because it doesn't even know what is a prompt.
youtube 2025-11-25T15:1… ♥ 1
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgycAHJI6QF5fdAG-lB4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxblKZaLzAT3mu6TIR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugzc0eBBvKwNR0D0OyR4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzdrVJ53CPxqnfxNPh4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzs2q2y09RHe1gU1Td4AaABAg", "responsibility": "developer", "reasoning": "virtue",           "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwqAMRbRMphohcakdd4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzhAR1gvu95cACD9_54AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwhQ2fUeLaSnGl4sIJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwtE3N6RYpad645Cp14AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxINFB3kenh1VjuTql4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
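To verify that a coded row matches the raw LLM response, a minimal sketch (assuming the response is a valid JSON array of per-comment codings, and abridging it to two of the ten entries above) could parse the array and look up one entry whose dimensions agree with the Coding Result table:

```python
import json

# Raw LLM response: a JSON array of per-comment codings, abridged to two
# entries from the full ten-item response shown above.
raw_response = '''
[
  {"id": "ytc_UgycAHJI6QF5fdAG-lB4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxblKZaLzAT3mu6TIR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
'''

# Index the codings by comment ID for O(1) lookup.
codings = {item["id"]: item for item in json.loads(raw_response)}

# Pull the entry whose coding matches the table (ai_itself / resignation).
row = codings["ytc_UgxblKZaLzAT3mu6TIR4AaABAg"]
print(row["responsibility"], row["emotion"])  # ai_itself resignation
```

The same lookup generalizes to the full response: indexing by `id` lets the inspection view join each raw coding back to its comment without scanning the array repeatedly.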