Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It is modeled on us and how we learn, just the exact details of it are a bit different. Since model training is very labor-intensive, what is done nowadays is that you have a "big AI" teaching smaller, simpler AIs as if they were its kids, as weird as that sounds. A large, complex "big daddy AI" can be very powerful, but that also makes it impractical for a lot of everyday situations where you would want an AI to handle or simplify some task. So what you do is build a smaller one that is practical for everyday use, and then hand it over to the "big daddy AI" to be trained. And since computers on their own are faster than when humans are in the loop, you can have the smaller AI trained, tested, and ready to work in a matter of days instead of the months it would take if humans did it.

That part is not spoken of so much because it gives people an uneasy feeling, like the big AI is "having children," but it's more like acting as a school teacher would. It shows a picture and asks, "What is this, little AI?" and the smaller one goes "It's a cat!" and the big one says "No, try again, little one," and the small one says "It's a dog!" and the big one says "Very good, little one, here, have a reward!" and gives what you could call a digital form of a cookie or a piece of candy: a 'reward token'. The AI being taught is basically programmed to want reward tokens, but it has to learn *how* to get them, and in the process it learns a language, or image processing/classification, etc.

There are also 'pre-labeled image training sets' being sold, so you can buy 1 million pictures with 100k men, 100k women, 100k children, 100k dogs, 100k cats, 100k cars, etc., all categorized and labeled, and use them to train your own AI to recognize them.
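The teacher/student setup the comment describes is, roughly, what the literature calls knowledge distillation: a student model is trained to match the outputs of a larger teacher rather than hand-made labels. Below is a minimal sketch of that idea under strong simplifying assumptions: both "teacher" and "student" are tiny linear softmax models, and all names (W_teacher, W_student, the temperature T) are illustrative, not from the source.

```python
import numpy as np

def softmax(z, temp=1.0):
    """Softmax with a temperature; higher temp = softer distribution."""
    z = np.asarray(z, dtype=float) / temp
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
n_features, n_classes = 4, 3

# Hypothetical "teacher": fixed weights standing in for a big trained model.
W_teacher = rng.normal(size=(n_features, n_classes))
# Student starts from scratch and learns only from the teacher's answers.
W_student = np.zeros((n_features, n_classes))

X = rng.normal(size=(200, n_features))  # unlabeled examples
temp, lr = 2.0, 0.5

for _ in range(300):
    grad = np.zeros_like(W_student)
    for x in X:
        p_teacher = softmax(x @ W_teacher, temp)  # teacher's soft "answer"
        p_student = softmax(x @ W_student, temp)  # student's current guess
        # Gradient of the cross-entropy between student and teacher outputs:
        # pushes the student's distribution toward the teacher's.
        grad += np.outer(x, p_student - p_teacher)
    W_student -= lr * grad / len(X)

# After training, the student should mostly agree with the teacher's top pick.
agree = float(np.mean(
    [np.argmax(x @ W_student) == np.argmax(x @ W_teacher) for x in X]
))
print(round(agree, 2))
```

Note that no human ever labels the data here: the "reward" signal the comment gestures at is replaced by a loss that measures how far the student's guess is from the teacher's, which is why the loop can run unattended at machine speed.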
youtube AI Governance 2025-07-12T11:5… ♥ 8
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_Ugztt3ghGlg5Iy6s6rB4AaABAg.AKlvRg1B7KNAKq4ziuiEd7","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_Ugztt3ghGlg5Iy6s6rB4AaABAg.AKlvRg1B7KNAVS_m7sqcem","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytr_UgzNMsIkLpe60vxUwNt4AaABAg.AKfusZ1OUIFAPeFtA8CZCh","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytr_UgwQYkFv0GS7zRQQe8h4AaABAg.AKZVTLwJ-cNAKq9h7ocT4k","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytr_UgzYuF1JNM7wHgbYsFt4AaABAg.AKTpYW3kgD5AVdJFOvDu8C","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytr_UgyDkY66R7ZahdrSX5t4AaABAg.AKSAnByw2wXAKX-VOar5rG","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_UgyvaVyySY3bkB_45t14AaABAg.AKNPICTlMGHAKTwtJGSBrb","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyvaVyySY3bkB_45t14AaABAg.AKNPICTlMGHALSctW-yvkf","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytr_UgyvaVyySY3bkB_45t14AaABAg.AKNPICTlMGHALSd-h7joFT","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugy5J-ExKffm5FofABh4AaABAg.AKMpvAmqLVMAKTlKWdFzNk","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]