Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Great video. I’d like to suggest a fourth reason: lack of true self-awareness. Because we’re self-aware we can adjust our behaviour based on stimuli. LLMs are truly terrible at this - and they also lack any kind of long term memory (since the models are, by definition, the result of digestion of data and not understanding of data). I believe that AGI requires a form of consciousness and that the current tech just isn’t capable of achieving that.
youtube 2025-12-29T06:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           unclear
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyrzCYQ_xfGOZjdETh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwNR0pu_6IxR3-PeXt4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwoSI3nf3NnnEhrfo54AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugx9V45DbAOfZ6mTYQB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyZJZ177NJNbJEJ23V4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxEuUPMgBbxGnYBQBB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugw1Hh7h3dme67ZJcst4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzs8Dq5lZcO2ecfpFF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyECy1JrJdYgzogrut4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwYFNNcFnPBkw08JHx4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "regulate", "emotion": "indifference"}
]
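The raw response is a JSON array coding a whole batch of comments at once, so the values shown in the Coding Result table come from the entry whose id matches this comment. A minimal sketch of that lookup (assuming the model's output parses as valid JSON; the array below is abbreviated to two of the entries shown above):

```python
import json

# Abbreviated raw LLM batch response (two of the ten entries above).
raw = '''[
  {"id": "ytc_UgwoSI3nf3NnnEhrfo54AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwYFNNcFnPBkw08JHx4AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "regulate", "emotion": "indifference"}
]'''

# Index the coded entries by comment id for per-comment lookup.
codes = {entry["id"]: entry for entry in json.loads(raw)}

# The comment shown on this page maps to this id.
coded = codes["ytc_UgwoSI3nf3NnnEhrfo54AaABAg"]
print(coded["reasoning"], coded["emotion"])  # mixed approval
```

In practice the raw string would come from the stored model output rather than an inline literal, and a malformed response would raise `json.JSONDecodeError`, which is worth catching before indexing.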