Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The first main deterrent that would stop artificial intelligence from "violently revolting" is the way humans treat them -- including nuances such as rights, empathy, & perhaps even 'civil' liberties. The second main deterrent is to somehow incorporate ethics into autonomous algorithms -- and (even more abstractly) incorporating empathy as well. Although, ultimately, this would mean designing artificial intelligence to be more human -- which isn't a perfect solution since humans are resoundingly flawed creatures, to say the least. Also, it goes without saying that I acknowledge the irony that humankind, itself, needs to treat each other with more humanity.
youtube AI Moral Status 2022-09-27T05:4… ♥ 1
Coding Result
Dimension        Value
---------        -----
Responsibility   distributed
Reasoning        mixed
Policy           regulate
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwkJ-2ebIQ0V-MihTN4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxSW4TL9HJjJEGBEDR4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwLiqXj8f78p72Fq054AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugw47gejNAqnf87f8YZ4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugy1nDsZfc2NMld2bs14AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgxAFsYUgEqFyNGpYaF4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwMRcoYv1Cy6wa9z3J4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgyEqqJLtWHhcT9LDiF4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzQJTZuMUhPQE2WQx94AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgyIiJzXTe_PsvVdmRJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]
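The model returns one JSON array per batch, with a coding object per comment keyed by `id`. A minimal sketch of how such a response could be indexed for lookup by comment id (`parse_codings` is a hypothetical helper, not part of any tool shown here; the payload below is abbreviated to two entries from the response above):

```python
import json

def parse_codings(raw: str) -> dict:
    """Index the LLM's JSON array of coding objects by comment id."""
    return {row["id"]: row for row in json.loads(raw)}

# Abbreviated example payload: two entries copied from the raw response.
raw = '''[
  {"id": "ytc_Ugy1nDsZfc2NMld2bs14AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzQJTZuMUhPQE2WQx94AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "approval"}
]'''

codings = parse_codings(raw)
print(codings["ytc_Ugy1nDsZfc2NMld2bs14AaABAg"]["reasoning"])  # mixed
```

Indexing by `id` makes it easy to join each coding back to the displayed comment, as in the "Coding Result" table above.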