Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Such a one sided discussion. Many of the things dr. Geoffrey said are disputed in the field. Some are just pure fantasy, like the "self improvement" idea. There is no real world evidence to support this idea. True AGI could arguably do that , but LLMs are not AGI and it's not like if you keep feeding texts to word-guessing machine it will all of the sudden "wake up" or whatever AI boomers and or doomers have fever dreams about .
Source: youtube · AI Moral Status · 2026-03-02T10:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  company
Reasoning       mixed
Policy          industry_self
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxHim4QOI6M9lAf-rJ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx2mc4nT4ED2ocOhCR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxWV79EHLBg15FD3GR4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyRZ1yNU1oFj-HvB9B4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugw15sRv6kQ-mG4DVRl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugwox9CIgxZZqA9TVdp4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwDMizUKXU1aq0BZ0R4AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "industry_self", "emotion": "outrage"},
  {"id": "ytc_UgyViRxc-y1Y93Ojn0V4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyOBaytEy31N5QY2FF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugz9YHai4RCekzBj2fZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
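When inspecting raw responses like the batch above, it can help to parse the JSON and flag any value that falls outside the codebook. A minimal sketch in Python; the allowed code sets below are inferred only from the values visible in this batch and may not be the full codebook:

```python
import json

# Allowed values per dimension, inferred from the responses shown above.
# The project's actual codebook may include additional codes.
ALLOWED = {
    "responsibility": {"none", "distributed", "ai_itself", "company"},
    "reasoning": {"mixed", "consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability", "ban", "industry_self"},
    "emotion": {"indifference", "fear", "outrage", "approval", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and return out-of-codebook values."""
    problems = []
    for row in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                problems.append({"id": row.get("id"), "dimension": dim, "value": value})
    return problems

raw = ('[{"id":"ytc_UgwDMizUKXU1aq0BZ0R4AaABAg","responsibility":"company",'
       '"reasoning":"mixed","policy":"industry_self","emotion":"outrage"}]')
print(validate_batch(raw))  # an empty list means every value is in the codebook
```

An empty result confirms the batch conforms to the inferred codebook; any flagged rows point to responses worth re-inspecting by hand.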