Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytr_UgyCQLSvh…` — @XEN-ZOMBIE I really want that to be true so bad and I had more confidence about…
- `ytc_UgzPyVDO1…` — Yes! Burn it with fire! As someone who understands the tech being used here, I…
- `ytc_UgzOcOI6v…` — Aarushi: I’m an ML engineer / Also aarushi: wait what is a black box about a neu…
- `ytc_UgwPt1Crs…` — I had no idea people use AI as reference. Using AI as a reference is pretty stup…
- `ytc_UgzvT36mP…` — AI "artists" seems to be sad people who have never created anything by themselve…
- `ytc_UgwKiwx4E…` — It's all about the AI software processing of these data points collected by the …
- `ytr_UgiyiP9uz…` — +Calvin Smith actually, I don't think scientists believe lizards to be conscious…
- `ytr_UgxYbkEob…` — @abcdef8915 Exactly. Obviously it’s important to focus on the negatives and try …
Comment
In an even broader sense, it comes down to whether machines can have legal personhood.
If they can, then that would make a lot of rich people very happy, because they could commission machines to commit crimes on their behalf.
In this case, it's the difference between "I am showing my pet robot lots of art so that it can learn how to be an artist", versus "I am entering other people's art into my software black box without their permission so that I can claim whatever random mishmash of different pieces it spits out as an original artwork".
These describe the same action, but only one of them is a crime. The difference is the agency ascribed to the computer program.
When we say "learning" or "training" with regard to software, these are metaphors. When we take them literally, we buy into tech business hype, at the direct expense of the Rule of Law.
Source: youtube · Video: AI Responsibility · Posted: 2023-01-13T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugxm0t0ZI0Xc72jEKFx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugxp3IfXGj4fB4KI6qB4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzlunT35mztAlu9PmR4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxJCN7TWWvA5NeGCdJ4AaABAg", "responsibility": "user", "reasoning": "contractualist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugzh2U36vX2JSOTZHgV4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgymHrgBHP2PJN374b94AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwOcptUilcQFORvy7p4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgzCO3ES8Gz30Jt4JR94AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy59wryd6kPwjXlY6t4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxUGP4LObWCiK4C7g94AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
```
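The lookup-by-comment-ID view above amounts to parsing the raw LLM response (a JSON array of per-comment codings) and indexing it by `id`. A minimal sketch of that step in Python — the helper name `index_by_id` is hypothetical, and the sample record below is taken verbatim from the raw response:

```python
import json

# One record from the raw LLM response shown above, kept verbatim.
raw_response = """
[
  {"id": "ytc_UgxJCN7TWWvA5NeGCdJ4AaABAg",
   "responsibility": "user", "reasoning": "contractualist",
   "policy": "regulate", "emotion": "fear"}
]
"""


def index_by_id(raw: str) -> dict:
    """Parse a raw LLM response and index the coded dimensions by comment ID.

    Hypothetical helper: each entry keeps only the four coded dimensions
    (responsibility, reasoning, policy, emotion), dropping the `id` key.
    """
    rows = json.loads(raw)
    return {row["id"]: {k: v for k, v in row.items() if k != "id"}
            for row in rows}


codes = index_by_id(raw_response)
print(codes["ytc_UgxJCN7TWWvA5NeGCdJ4AaABAg"]["responsibility"])  # -> user
```

Note the truncated IDs in the sample list (e.g. `ytc_UgxJCN7TWW…`) are display shortenings; the lookup assumes the full IDs as they appear in the raw response.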