Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I work in an area in which AI-based solutions are being introduced rapidly. Currently, they are very bad. The errors are big and require a lot of manual fixing (at present, the work would have been faster if it were done entirely by an actual human when one takes the manual corrections into account). BUT -- one needs to actually check the work to see the errors, and one needs to know what the real result should be to see the error. My concern is that people making the decisions to purchase the AI solutions generally have never performed the tasks they seek to replace with AI, so they cannot readily see this reality. The output "looks" correct, therefore it can be viewed as correct. And most AI solutions, at least the ones I have seen, do not actually "learn" from corrections being manually inputted.
My true, genuine concern is that businesses and institutions are happily adopting solutions that will lead to incorrect outcomes. And since no one can actually audit the "thought process" of AI (the whole point of it is that it's not an if-then logic running it, but a derived mechanism), it's almost impossible to point out errors, unless they are straight errors of "fact" (whether information or e.g., an incorrect step in a mechanical process).
AI might get better, but I don't think there are sufficient controls in place to make sure it is actually working correctly, i.e., making correct decisions. We are being experimented on as companies release AI tools that do not do the job they are advertised to do, and we have to live with the consequences of those experiments.
youtube
AI Jobs
2025-10-08T08:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwrNt2sOCZ27ouVaS54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzBPHotJZseun3hV3d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz8xB4YkG5hJs5T2w94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwLqZ1AqJjqeN3p17F4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgxNfCvXIWXx8SPU1LB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwY1Rq4KOQ0rQN2VsN4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwm51F0nVJCH2R5AiZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzyJxR5bDAVfctwUFx4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxGR8SWk5bdr9-L-cZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzYyDIvRhldaeoxmdh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
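The raw response above is a JSON array of per-comment codes. A minimal sketch of how such a response could be parsed and sanity-checked before storage — note that the allowed value sets below are assumptions inferred from the sample output on this page, not a definitive coding schema:

```python
import json

# Assumed vocabularies, inferred from the values visible in this dump.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"resignation", "fear", "approval", "mixed",
                "outrage", "indifference"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse the model's JSON array and reject rows with unknown values."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows

# Example with a single (hypothetical) row in the same shape as above.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"resignation"}]')
print(len(validate_response(raw)))  # 1
```

Rejecting out-of-vocabulary values at ingest time is what makes a lookup by comment ID trustworthy: every stored row is guaranteed to carry one value per dimension from a fixed code book.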