Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
GPT 3.5 became slower, losing connection all the time (requires to refresh the screen), seems to have cut-off points in longer answers (getting cut off in middle of a narrative description is like a giant middle finger). Those alone makes the interactions more tedious and annoying than in first month or two - to the point where I'd rather just google directly, than bother answering the "stuttering AI bookworm" who needs to be double-checked on most factual/code requests, anyway. Sometimes I ask questions regarding laws and morals, individual vs society etc - and while it did provide answers, most of text would be wasted at the usual "AI assistant ethical excuses" bs. I could learn to bypass those, but hey, the whole purpose of an AI tool was to reduce amount of effort put, not increase it further. I can't say if actual quality of answers went downhill. The limitations are certainly more noticeable now, but that is likely just from more experience of using it. I also did not try GPT4, because the changes in 3.5 are not inspiring confidence to subscribe. Would rather try out Bard/Bing/whatever first and see if they can compete.
Source: reddit · AI Responsibility · 1682278387.0 · ♥ 7
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           unclear
Emotion          outrage
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_jhe6yve", "responsibility": "company", "reasoning": "deontological", "policy": "industry_self", "emotion": "outrage"},
  {"id": "rdc_jhgjr2n", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_jhdxmke", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jheeclp", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_jhfhjp6", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"}
]
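A minimal sketch (Python assumed) of how a raw batch response like the one above could be parsed and a single comment's coding looked up by id. The ids and field names are copied from the sample response; the variable names are illustrative, not part of any particular pipeline:

```python
import json

# Raw batch response as returned by the model (copied verbatim from above).
raw = """[
  {"id":"rdc_jhe6yve","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
  {"id":"rdc_jhgjr2n","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_jhdxmke","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_jheeclp","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"rdc_jhfhjp6","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]"""

# Parse the batch and index the codings by comment id.
codings = json.loads(raw)
by_id = {c["id"]: c for c in codings}

# Look up one comment's coding across the four dimensions.
row = by_id["rdc_jheeclp"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
```

Indexing by id this way makes it easy to join the model's codings back onto the original comments, and a `KeyError` on lookup flags any comment the model silently dropped from the batch.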