Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
LLMs are just auto correction. They are not smart and cannot really "think". They are essentially a database where you store information in the form of weights and they call it "learning". The LLMs can then output text which might look correct but you cannot be sure. The output is not consistent (meaning same input == same output) and therefore not reliable. And even if it was, the input in natural language can mean different things to different people. You need the human context which LLMs don't have. Why should I give the task "book a flight" to an LLM when it might just buy a refridgerator instead because it's got the same name as the airport? Everyone is scared of "AI Agents" but we do have agents a long time already. They are called algorithms and they do exactly what they are told to. We should focus more on that instead of autocorrection. "AI" won't make humans jobless, it will create more jobs for people who are able to fix the mess LLMs made.
youtube AI Governance 2025-09-04T08:3… ♥ 18
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyjfoeGAYWA31JzwE54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgznWK68YZFw6s5YAtZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxmLlIUmJ2ciU7I-Bd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyJa2voJCFAwuE1xER4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxR-Z9e0O5se5HpVGl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyggQonjWqUV602KjF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxy57MacR0tExKIauZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz6NaCgeCQBe_uSU9B4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwUpwzltbLDajktKqh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyT5HKLV97TkGGMyGR4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"}
]
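A raw response like the one above can be parsed and checked against the coding scheme before the per-comment results are stored. The sketch below is a minimal illustration, not part of the original pipeline; the allowed value sets are inferred only from the values visible in this response, and the full codebook may contain more categories.

```python
import json

# Allowed values per dimension, inferred from the response shown above
# (assumption: the real codebook may define additional categories).
CODEBOOK = {
    "responsibility": {"developer", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"indifference", "fear", "outrage", "mixed", "resignation", "approval"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only records whose
    values fall inside the codebook for every dimension."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items())
    ]

# Example with the record that matches this page's coding result:
raw = ('[{"id":"ytc_UgyJa2voJCFAwuE1xER4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"none","emotion":"indifference"}]')
coded = parse_raw_response(raw)
print(coded[0]["reasoning"])  # deontological
```

Filtering at parse time means a malformed or off-codebook LLM answer is dropped rather than silently written into the results table.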