Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or pick one of the random samples below.
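A lookup by ID amounts to pulling one record out of the stored coding output. The sketch below shows one way this could work, assuming the codings are kept as a JSON array shaped like the raw LLM response further down the page; the file name `coded_comments.json` and the helper name are illustrative assumptions, not the tool's actual storage layout.

```python
import json

def lookup_coding(comment_id: str, path: str = "coded_comments.json") -> dict | None:
    """Return the coded record for one comment ID, or None if it is absent.

    Assumes `path` holds a JSON array of objects like those in the raw LLM
    response shown below, e.g.
    {"id": "ytc_...", "responsibility": "user", "reasoning": "...",
     "policy": "...", "emotion": "..."}
    """
    with open(path, encoding="utf-8") as fh:
        records = json.load(fh)
    return next((r for r in records if r.get("id") == comment_id), None)

# Example, using a full ID taken from the raw response at the bottom of this page.
record = lookup_coding("ytc_UgxVtMgKuxMaRlqbn4h4AaABAg")
if record is not None:
    print(record["responsibility"], record["emotion"])
```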
Random samples (click to inspect):

- "People wonder why Lake Mead. Is no more well AI, drank it all up. So that's why…" (ytc_Ugz8kA-k_…)
- "So, what you're saying is that I can use prompts to adjust the AI short film tha…" (ytc_Ugz7qd7Xx…)
- "As someone here who has used stable diffusion... Its output is incredibly blah.…" (ytc_Ugxum1J3F…)
- "As good and idealistic as the initial idea of AI. but i'm scared of what corpora…" (ytc_UgySHMrhE…)
- "The person who invented AI technology is just helping people to make all kinds o…" (ytc_UgwPUnUee…)
- "Thats what Humans have been doing since they were introduced to the planet dont …" (ytc_UgxUQmYVF…)
- "These guys are idiots, you put people out of work so big corporations can fatte…" (ytc_UgzrhvsO1…)
- "I asked Grok about this and it told me in 2017 Facebook's AI researchers were wo…" (ytr_UgzQFCjwg…)
Comment
I've been a machine learning researcher for over a decade and imho the real risk from these algorithms isn't some terminator doomsday scenario that organizations like openAI are condescendingly crying but instead the risk comes from people trusting these things when they probably shouldn't. More than one place has done an experiment with algorithmic policing and with a naive 1000ft view of the subject it actually sounds like a good idea but practically speaking it is a horrible idea right now. The datasets are extremely biased and therefore any algorithm will also be extremely biased. There is a saying in the field that is something like "junk in and junk out" and that refers to the data used to train the models. Even if we were somehow able to get perfect data the next problem is that with large models even the most informed researchers on the planet basically have no clue how any particular decision, or more generally any output, from such a model was formed. Just as an example, we would have virtually no way of knowing if the reason a model said to increase police presence in an area was due to biased policing in that area previously or if there was actually some real pattern outside of bias that the model was picking up on. Maybe some day we'll have models that can give us a chain of reasoning behind the decision so we can say for sure "this is not racist" or whatever and that it actually found a real pattern that will help decrease crime but right now that isn't a thing that is possible with current tech
youtube · AI Bias · 2023-06-26T00:2… · ♥ 36
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
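Each row of the table corresponds to one field of a record in the raw response below. A minimal sketch of that record shape, using only values observed on this page (the type name is an assumption, and the full codebook may allow values that do not appear here):

```python
from typing import TypedDict

class CodedComment(TypedDict):
    """One coded comment, mirroring the objects in the raw LLM response."""
    id: str
    responsibility: str  # observed: "user", "developer", "distributed", "none"
    reasoning: str       # observed: "consequentialist", "deontological", "virtue", "unclear"
    policy: str          # observed: "liability", "none", "unclear"
    emotion: str         # observed: "mixed", "outrage", "fear", "resignation", "indifference"

example: CodedComment = {
    "id": "ytc_UgwG0qVZAheY73BoPQ14AaABAg",
    "responsibility": "user",
    "reasoning": "consequentialist",
    "policy": "liability",
    "emotion": "mixed",
}
```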
Raw LLM Response
[
{"id":"ytc_UgxVtMgKuxMaRlqbn4h4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxwySjTylRnouNaZXZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx-mglB0TwSnkI0srl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgylCBPtYb_Krmo7lap4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyEjnPapaJ1epJmTkJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxfyy7JF8ksetVci1F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzvR6fLxdePeoVT2jF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugzi-ZZ2TLVhjtUR_bZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwDjRH_eHdksTXfeSV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwG0qVZAheY73BoPQ14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
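The coding result shown above is one record pulled out of an array like this. Below is a small sketch of how such a batch response could be parsed and a single comment's coding extracted; the abbreviated response text and the target ID are copied from this page, while the parsing itself is an assumption about how the tool works rather than its actual implementation.

```python
import json

# Abbreviated copy of the raw LLM response above (first and last records only).
raw_response = """[
  {"id": "ytc_UgxVtMgKuxMaRlqbn4h4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwG0qVZAheY73BoPQ14AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"}
]"""

# Index the batch by comment ID so one comment's coding can be pulled out.
by_id = {rec["id"]: rec for rec in json.loads(raw_response)}

rec = by_id["ytc_UgwG0qVZAheY73BoPQ14AaABAg"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {rec[dim]}")
```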