Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
In an even broader sense, it comes down to whether machines can have legal personhood. If they can, then that would make a lot of rich people very happy, because they could commission machines to commit crimes on their behalf. In this case, it's the difference between "I am showing my pet robot lots of art so that it can learn how to be an artist", versus "I am entering other people's art into my software black box without their permission so that I can claim whatever random mishmash of different pieces it spits out as an original artwork". These describe the same action, but only one of them is a crime. The difference is the agency ascribed to the computer program. When we say "learning" or "training" with regard to software, these are metaphors. When we take them literally, we buy into tech business hype, at the direct expense of the Rule of Law.
youtube AI Responsibility 2023-01-13T00:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       contractualist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugxm0t0ZI0Xc72jEKFx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugxp3IfXGj4fB4KI6qB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzlunT35mztAlu9PmR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxJCN7TWWvA5NeGCdJ4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzh2U36vX2JSOTZHgV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgymHrgBHP2PJN374b94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwOcptUilcQFORvy7p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzCO3ES8Gz30Jt4JR94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy59wryd6kPwjXlY6t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxUGP4LObWCiK4C7g94AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
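Because the raw response is a plain JSON array, a coded comment can be looked up by its id with a few lines of Python. This is a minimal sketch, not part of any pipeline described here; the `raw` string below quotes just one entry from the response above for brevity.

```python
import json

# One entry copied from the raw LLM response above (the array normally
# holds one object per coded comment).
raw = """[
  {"id": "ytc_UgxJCN7TWWvA5NeGCdJ4AaABAg",
   "responsibility": "user",
   "reasoning": "contractualist",
   "policy": "regulate",
   "emotion": "fear"}
]"""

# Index the codings by comment id for direct lookup.
codings = {row["id"]: row for row in json.loads(raw)}

print(codings["ytc_UgxJCN7TWWvA5NeGCdJ4AaABAg"]["emotion"])  # prints "fear"
```

Indexing by id makes it easy to join the LLM codings back to the original comments, which is how the per-comment "Coding Result" table above can be produced.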