Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI have been used aggressively in my company but it's best if a smart human operates it, not let me run unsupervised, because it goes crazy. Seriously, without doubt, every time if you run for a while, sometimes not even that long. It'll go on a wrong direction and it's not even able to realize it. The time-to-crazy is random, even simple projects can go there suddenly, so you can't really let it run without looking at it, in fact since it can run so much workload so fast, you'll need to spend even more attention on what's it doing, so it's more exhausting than the one-at-a-time pace that's how leisurely in comparison.... So I don't think you can go to the unsupervised, complete replacement narrative. The last time 20 percent of work is notorious hard to automate, the last 5 percent can kill your system, the last 1 percent is the definition of professional expertise vs amateur. How about this, the only thing better than an AI, or human, is AI in the hands of skilled human. This I've seen again and again, like multiplier that's better than either one alone. By orders of magnitude. So I believe the danger isn't AI taking over the world, is human-AI hybrid, bionic cybernetics, making staying pure human an evolutionary dead end. That's far more superior, more possible, and far more dangerous.
YouTube · Viral AI Reaction · 2025-11-23T23:2…
Coding Result
Dimension        Value
--------------   --------------------------
Responsibility   developer
Reasoning        deontological
Policy           liability
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzcRfvMkkV_GrE2HBZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx_e1fGL-lAbU9rcqd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy040_lcRssMKO7s6F4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw8HvhhOCeore95nO54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzR62Kip871FOjQyEF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzPv5sjHKZX36e6qvd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzOukSBouAAdyPc4dJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugyl_MUgO-VfRb5EDQJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgySCUX5LjVeM-2AgrR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwULFuFB8HVyKp7Ekh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
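A raw response like the one above can be parsed and checked before it is stored as a coding result. The sketch below is a minimal, hypothetical validator: the allowed value sets for each dimension are an assumption inferred from the labels visible in this dump, not a confirmed codebook, and `parse_codes` is an illustrative helper name rather than part of the actual pipeline.

```python
import json

# Assumed codebook, inferred from the values seen in this dump; the real
# coding scheme may allow additional labels per dimension.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"liability", "regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    dict keyed by comment id, rejecting any out-of-vocabulary label."""
    coded = {}
    for row in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
        coded[row["id"]] = row
    return coded
```

Looking up the comment shown above by its id then returns its coded dimensions, e.g. `parse_codes(raw)["ytc_UgySCUX5LjVeM-2AgrR4AaABAg"]` yields the developer / deontological / liability / fear row from the Coding Result table.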