Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Addendum: Soooo many people love to strawman things like what I just said by implying I'm saying AI is harmless. So just in case: I would put money on AI not going AGI in the next decade. That in no way shape or form means I think it's safe and harmless. The fact is, long before we had LLMs we had AI algorithms behind the scenes of many powerful systems that were already doing enormous amounts of damage. This technology should absolutely be heavily regulated, and frankly I think much of what the LLM companies have done should be outright illegal; if the law isn't clear enough on that it should be made clear and retroactively applied. I don't think AI is safe; on the contrary, I think the kind of wild speculation going on in videos like this one actually risks obfuscating and distracting us from the very real damage these things are already doing and have been for much longer than just this latest iteration. It's very frustrating to have almost every discussion about AI be hijacked by fanciful AGI debates.
youtube · AI Moral Status · 2025-10-31T18:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytr_UgxhOnrWN7_fgKSOS-x4AaABAg.AOxn95cNpLsAP-YYRFy2ha", "responsibility": "none", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytr_UgyhxFL30B3lhfg5zX54AaABAg.AOxkxHTaEbVAOzRbU7J7A1", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytr_UgyrknVmtrkp5GuJec14AaABAg.AOxg7ERjunMAOyh8uhQE62", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytr_UgyrknVmtrkp5GuJec14AaABAg.AOxg7ERjunMAP-__gAArSz", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytr_UgyrknVmtrkp5GuJec14AaABAg.AOxg7ERjunMAP8bidSSLav", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugy-9l3p47Y3HD5zs5V4AaABAg.AOxU9wjgwOqAOxV7sBCRoV", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgwjdYfnsDQuw2Edxfx4AaABAg.AOxT4k3qO7zAP3Hr8dPGho", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytr_UgwJFaZBAC01Nvug29F4AaABAg.AOxRnH3XJiPAOxRtnb4U61", "responsibility": "company", "reasoning": "deontological", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytr_UgwJFaZBAC01Nvug29F4AaABAg.AOxRnH3XJiPAOzeSC0e5El", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytr_UgwJFaZBAC01Nvug29F4AaABAg.AOxRnH3XJiPAP4AAALEb2S", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "indifference"}
]
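The raw response above is a JSON array of per-comment codings, one object per comment id. A minimal sketch of how such a response could be parsed and indexed for lookup, assuming Python; the two sample records are taken verbatim from the array above, and the variable names are illustrative:

```python
import json

# Raw LLM response: a JSON array of per-comment codings (two records
# copied from the full response shown above).
raw = """[
  {"id": "ytr_UgxhOnrWN7_fgKSOS-x4AaABAg.AOxn95cNpLsAP-YYRFy2ha",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "ban", "emotion": "fear"},
  {"id": "ytr_Ugy-9l3p47Y3HD5zs5V4AaABAg.AOxU9wjgwOqAOxV7sBCRoV",
   "responsibility": "distributed", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]"""

# Index the codings by comment id for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Look up the coding for one comment id.
row = codings["ytr_Ugy-9l3p47Y3HD5zs5V4AaABAg.AOxU9wjgwOqAOxV7sBCRoV"]
print(row["policy"])   # regulate
print(row["emotion"])  # fear
```

In practice a parse step like this would also want to guard against malformed model output (e.g. wrap `json.loads` in a `try`/`except` and reject records missing any of the four dimension keys).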