Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI is fuckin retarded mate. I've spent the past two days trying to fight my computer to get this computer program to work properly. I send in my code to ChatGPT and it gives me a really long and plausible answer. To the point where I think "wow, this is impressive". It's like whenever I don't know how to do something, I'll Google it. The great thing about Google is that the results are web pages written by humans so you've actually got experts teaching you how to do shit properly. The advantage of AI is the fact that you can be very very specific about your problem and even send in your whole file of code for it to analyze. The major problem is the fact that it's convincing even when there isn't any substance to back up its reasoning. Like it writes this really plausible sounding essay about how to solve your problem. It's like you don't know any better because you don't understand the problem. That's why you asked AI in the first place. But holy shit - when you actually try out the advice given by the AI and actually implement what it tells you to do, it's totally wrong. Like, literally it gives me like 10 lines of changes and an overcomplicated explanation of what it's describing but it's just totally wrong. It turns out I literally forgot to change the old variable to the new variable. THAT WAS IT: ONE STINKING VARIABLE. And it couldn't even get that right. Fuck AI...
youtube AI Responsibility 2025-07-25T19:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgwJc4GaWLUwX2NBsup4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw4Re6AfE1cH8iXKh54AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxbaTXHk2DV6nNrOIN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwqI-Zuhs6f85A5PMF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzw-kZ73biH3tl8Ogd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwtKcyf_rraqLctEVV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxihLetPsbZw-D6XrB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgywMxh9fb399JyeDJ94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxfewhFucVedIc24Wl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwardUIHZ1N4T8VA494AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
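The raw response above is a JSON array of per-comment code assignments. As a minimal sketch (not the tool's actual implementation), a response like this could be parsed and indexed by comment id to recover the codes for one comment; the field names and the two sample entries below are taken verbatim from the response above.

```python
import json

# Two entries copied from the raw LLM response above, trimmed for brevity.
raw_response = '''[
  {"id":"ytc_UgxbaTXHk2DV6nNrOIN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzw-kZ73biH3tl8Ogd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]'''

# Build an id -> codes lookup so a single comment's coding can be inspected.
codes_by_id = {entry["id"]: entry for entry in json.loads(raw_response)}

codes = codes_by_id["ytc_UgxbaTXHk2DV6nNrOIN4AaABAg"]
print(codes["responsibility"], codes["reasoning"], codes["policy"], codes["emotion"])
```

Indexing by id rather than by array position keeps the lookup robust if the model returns the entries in a different order than the comments were sent.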