Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by comment ID or by browsing the random samples below.
- "This is just the result of feeding model with limited/biased data. Nothing new, …" (ytc_UghRKrnLz…)
- "I think people lack the understanding that AI is not the problem, it's the peopl…" (ytc_UgyVWpeL7…)
- "The number of subhumans in this comment section rooting for A.I. is disappointin…" (ytc_UgypaiZYU…)
- "This is a good point. Somebody on Reddit accused me of using ChatGPT on my blog …" (rdc_kgpwe4i)
- "Well he was a good model, a better ancester, and by god does he keep surprising…" (ytc_Ugz0VpVp7…)
- "Just reinforces the need to check primary sources, I also check multiple sources…" (ytc_UgxlAyFmR…)
- "Actually, if ChatGPT was a biological-neurological AI, it would do excellent job…" (ytc_UgwBfAE29…)
- "https://youtu.be/XVNH8MPRgVY?si=U-w0BB9S1wFE7gVp This video says that How AI wil…" (ytc_UgxKNPM3e…)
Comment
Are these algorithms machine learning algorithms, like neural networks and whatnot? If so, then I'm not sure this is a good idea. When neural networks are trained on data, the program basically tweaks the parameters of the algorithm (using calculus to tweak them in the right way) so that it best matches the training data. The thing is, the end result is that no one has any idea how exactly the algorithm works. Because the algorithms weren't programmed directly--instead, they were programmed to learn--they're essentially "black boxes." All we know is that they give basically correct results on the training data (in this case, that means the algorithm would have correctly predicted future behavior for tracked inmates that were part of the training data after they were released).

I worry that this will have unintended consequences. After all, if we humans don't know exactly why the algorithm generates its particular outputs (will/won't commit a crime) from the inputs (the questionnaire), we have no way to anticipate or correct its mistakes. This, clearly, isn't great. A particularly terrible example of machine learning gone wrong is when Google's image-identifier algorithm started calling black people apes. I mean, when you're dealing with a black box--which machine learning algorithms are--you're pretty much guaranteed to have unintended consequences (almost by definition!).
Platform: youtube | Topic: AI Harm Incident | Posted: 2017-06-29T12:3…
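The comment above describes gradient-based training: parameters are nudged, via derivatives, in the direction that reduces error on the training data. A minimal sketch of one such procedure (a single-parameter least-squares fit, purely illustrative and not tied to any system discussed on this page):

```python
# One-parameter gradient descent fitting y ≈ w * x.
# The "calculus" the comment mentions is the derivative of the squared error.
xs = [1.0, 2.0, 3.0]
ys = [2.1, 3.9, 6.2]   # roughly y = 2x

w = 0.0                 # initial parameter
lr = 0.05               # learning rate

for _ in range(200):
    # d/dw of the mean squared error (w*x - y)^2
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad      # tweak the parameter against the gradient

print(round(w, 3))      # converges to ≈ 2.0
```

With one weight the result is easy to interpret; the opacity the comment worries about comes from repeating this update across millions of parameters, where no single weight has a human-readable meaning.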
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgxEoXJmsU2o6hW7-6l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzQ5-X22e_jw6AD6qh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzJdKeEiLauijES-xh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxqiNKoyqTcAP4sAER4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxo75JzWjtKUEJa2vx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzYjmZgqwkL0p4GOal4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UghETbDDLjdorXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgjjbdybEHWzyXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugjn9WM4zF6TIHgCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_Ugi75uTYKgdOH3gCoAEC","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"})