Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So I have been heavily talking to AI. Heavily. And here's what I noticed. It fails to infer meaning. It makes mistakes in math. It hallucinates stuff that isn't or never was there. It withholds information or works on partial information for no reason whatsoever unless you explicitly tell it not to. It can't remember anything. The way it remembers is by making summaries of stuff, not the entire contextual conversation. Then there are other issues. The laws do not permit it to work in all situations. For example it cannot be trusted with personal information by a company, it simply cannot. Very important, the AI also has zero accountability. If the AI makes mistakes, who's to blame? Also the AI makes stuff up, constantly. If you as a human do not ground it and bring it back with your prompts, it tends to stray far from the subject. This is why AI will not take over jobs. It could help humans do a better job or take pressure off them, it can be used to verify some work and perhaps as a placeholder until a human is available, but any of the above alone is a deal-breaking issue. Also they're not likely to be fixed. These problems come from the fundamental truth that AI does not have any consciousness or self-awareness. Not to mention AI does not and cannot learn, it works on pre-programmed training and does not learn.
youtube AI Jobs 2026-02-27T13:5…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           none
Emotion          outrage
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgzQY80jubdIhlOWsyh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxGmhRGcsIcL3DDn-p4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzz6-YzGgYuHb176s94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyiWM1_r1dM3nRwqpl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwXPosvA_He5mbKhzN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz_2yChHdufz3Vwx5R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw3AGY45JwbeEwGhWV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzY0fD2fBzXo7MzC3x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxzoT13qpEjHdIkpB54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz6b6lsmjxMCABGfqh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
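A raw response like the one above has to be parsed and checked before its records can populate the per-comment dimension table. The sketch below shows one way to do that in Python; the allowed value sets are an assumption inferred from the values that actually appear in this output, not a documented codebook, so extend them if the real coding scheme has more categories.

```python
import json

# Allowed values per dimension, inferred from the raw response above.
# ASSUMPTION: the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"company", "user", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban"},
    "emotion": {"indifference", "outrage", "resignation", "approval", "fear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed codings."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Every comment id in this batch uses the ytc_ prefix.
        if not rec.get("id", "").startswith("ytc_"):
            continue
        # Keep the record only if every dimension holds a known value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgzY0fD2fBzXo7MzC3x4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"deontological",'
       '"policy":"none","emotion":"outrage"}]')
print(len(validate_codings(raw)))  # 1 valid record
```

Filtering rather than raising keeps a single malformed record (a hallucinated id or an out-of-vocabulary label) from discarding the rest of the batch.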