Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “AI technologies proven to be useful under an intelligent managment agency it's n…” (ytc_UgyXDz5DE…)
- “@Fenrith-Layprus just to make it clear, I fully agree with point 4, AI should …” (ytr_Ugwij_STT…)
- “That 10 hour long pause after “how do you open a file?” While the guy is despera…” (ytc_UgzI_3jUS…)
- “Ultimately AI art represents the faulty notion that art is ultimately ideas by h…” (ytc_Ugx87jj3P…)
- “Cool idea, but chatGPT's model is unable to come up with original idea's and is …” (ytr_UgwFSvwp7…)
- “Bro, look at my notebooks from primary and middle school. Very much NOT born wit…” (ytc_Ugyr5v8I-…)
- “95% of ai project fail. Ai isn’t at a point that companies could trust replacing…” (ytc_Ugx5iox_Q…)
- “Y’all please leave the reporter alone, y’all bullying him is the very thing that…” (ytc_UgzZatF-R…)
Comment
“Responsibly” is a word that does not exist to all human beings. If this was the case there wouldn’t be a gun debate.
There is no controlling ai. And this is something that if we screw up once, we can never go back. In my opinion, it’s best we just don’t ever make ai and stop now. Because if we do, someone will ruin it for everyone. Its what we do.
We need to learn to accept this and understand this is why ai is a bad idea. Just because we can doesn’t mean we should.
Source: youtube · AI Harm Incident · 2024-05-13T13:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugyy7CYQC4qD_IlcHVd4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgybrDbv8zqb70nwOWN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyiAe9dS2RYolJdAx54AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwKhCbTznANzcB2bWd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz2qW9hbTnPUyL0cLN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgymjMEcYy4W4VRFggd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgzGOz84FISdUuqIWc14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyaLd4dnqe6ottqVJZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgydA7sfcndV2JzOqlF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy9OtVlQEDpzuudwOx4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"}
]
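The lookup-by-ID flow above — matching a row of the raw batch response to the coded dimensions shown in the result table — can be sketched as a small parser that indexes the JSON array by comment ID and validates each row. Note the allowed category values below are inferred only from the responses visible on this page, not from the full codebook, and the parsing helper (`index_batch`) is a hypothetical name, not part of the tool:

```python
import json

# Allowed values per dimension, inferred from the visible responses.
# Assumption: the real codebook may define additional categories.
CODEBOOK = {
    "responsibility": {"ai_itself", "distributed", "company", "developer", "user", "none"},
    "reasoning": {"virtue", "consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "industry_self", "unclear"},
    "emotion": {"approval", "fear", "indifference", "outrage", "mixed"},
}

def index_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index codings by comment ID,
    rejecting any value outside the (assumed) codebook."""
    index = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in CODEBOOK.items():
            if row[dim] not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={row[dim]!r}")
        index[cid] = {dim: row[dim] for dim in CODEBOOK}
    return index

# Example: the row behind the "Coding Result" table above.
raw = ('[{"id":"ytc_UgybrDbv8zqb70nwOWN4AaABAg",'
       '"responsibility":"distributed","reasoning":"consequentialist",'
       '"policy":"ban","emotion":"fear"}]')
coded = index_batch(raw)
print(coded["ytc_UgybrDbv8zqb70nwOWN4AaABAg"]["policy"])  # → ban
```

Indexing by ID mirrors the tool's own "Look up by comment ID" affordance: once the batch is parsed, any coded comment can be fetched in constant time.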