Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I get the sense that Neil is not fully understanding how these narrow uses of AI could be wielded to their fullest potential. 33:12 Neil makes the point that AI can only use content that's been created or published via the internet, which is almost all of human knowledge at this point, BTW. What he doesn't seem to understand is that all of human creativity and knowledge have been plagiarized or influenced and synthesized via prior creativity from previous humans. We stand on the shoulders of giants. Even narrow AI can already use this process of previous knowledge to create original content via ever increasingly sophisticated synthesis on demand with images. We don't need AGI to spring forth. A sophisticated network of narrow AI is all that will ever be necessary. If a system of narrow AI becomes so sophisticated(smaller, greater in scalable synergy in relation to a larger spectrum of tasks, more efficient at mapping solutions within a landscape of awareness to potential problems) that it can emulate a system of brain neurons within a functional heirarchical catalyst oriented by a pragmatic consolidated system of archetypal means and methods(which could be developed over time with narrow AI reinforcement learning). It will essentially be close enough to AGI that we won't know the difference, nor will there really be a difference.
youtube AI Moral Status 2025-08-06T19:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_Ugxzklq2cpoKoRoPdvV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugxa441Vit3LCNTsTO54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"disapproval"}, {"id":"ytc_UgwlpFi2V3Rhb0Gqq294AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugxq9ZvAL937erqi0Cx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzfoXvHswhWNJ5HLTJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgyislsJiEr-NoblNoR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugyo-r4nOuF8rtdyp4l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugz5_mA_towBTgCmNTd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugx9MFUUTjF5wQZfmWp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxpmCOzc06Zed6WYo14AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"})
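Note that the captured response above terminates with `)` rather than `]`, so it is not valid JSON as-is. A minimal sketch of a tolerant loader follows; the function name `parse_codings` and the single-character repair heuristic are illustrative assumptions, not part of any actual pipeline shown here.

```python
import json

def parse_codings(text: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: coding dict}.

    Tolerates one observed malformation: a stray ')' where the closing
    ']' of the JSON array should be (as in the capture above).
    """
    text = text.strip()
    if text.endswith(")"):  # repair the stray closing parenthesis
        text = text[:-1] + "]"
    records = json.loads(text)
    # Key each record by its comment id; keep the four coding dimensions.
    return {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}

# Example with a malformed record shaped like the capture above
# (hypothetical id, for illustration only):
raw = ('[{"id":"ytc_example","responsibility":"none","reasoning":"unclear",'
       '"policy":"unclear","emotion":"indifference"})')
codings = parse_codings(raw)
# codings maps each comment id to its four coding dimensions.
```

Repairing the terminator before `json.loads` keeps the loader strict about everything else: any other malformation still raises `json.JSONDecodeError` and surfaces on this inspection page rather than being silently absorbed.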