Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I work in an area in which AI-based solutions are being introduced rapidly. Currently, they are very bad. The errors are big and require a lot of manual fixing (at present, the work would have been faster if it were done entirely by an actual human when one takes the manual corrections into account). BUT -- one needs to actually check the work to see the errors, and one needs to know what the real result should be to see the error. My concern is that people making the decisions to purchase the AI solutions generally have never performed the tasks they seek to replace with AI, so they cannot readily see this reality. The output "looks" correct, therefore it can be viewed as correct. And most AI solutions, at least the ones I have seen, do not actually "learn" from corrections being manually inputted. My true, genuine concern is that businesses and institutions are happily adopting solutions that will lead to incorrect outcomes. And since no one can actually audit the "thought process" of AI (the whole point of it is that it's not an if-then logic running it, but a derived mechanism), it's almost impossible to point out errors, unless they are straight errors of "fact" (whether information or e.g., an incorrect step in a mechanical process). AI might get better, but I don't think there are sufficient controls in place to make sure it is actually working correctly, i.e., making correct decisions. We are being experimented on as companies release AI tools that do not do the job they are advertised to do, and we have to live with the consequences of those experiments.
youtube · AI Jobs · 2025-10-08T08:3…
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgwrNt2sOCZ27ouVaS54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzBPHotJZseun3hV3d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugz8xB4YkG5hJs5T2w94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgwLqZ1AqJjqeN3p17F4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}, {"id":"ytc_UgxNfCvXIWXx8SPU1LB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgwY1Rq4KOQ0rQN2VsN4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugwm51F0nVJCH2R5AiZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzyJxR5bDAVfctwUFx4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxGR8SWk5bdr9-L-cZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgzYyDIvRhldaeoxmdh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"} ]