Raw LLM Responses
Inspect the exact model output for any coded comment: look up a comment by its ID, or pick one of the random samples below.
- "Couldn't? What are you talking about? There are already AI aps that can make ent…" (ytc_Ugzs8_cse…)
- "I just broke the filter by having a overly detailed character (I made the chat b…" (ytc_UgydC05ff…)
- "Would you happen to know what you do in an AI account? I'm curious. I guess trai…" (ytr_UgxZJzLMR…)
- "Yeah. Anyone in the field knows current AI is dumb. But most of us are huge nerd…" (rdc_ktu7hzl)
- "Elon Musk is right, but his motives are not noble. He is upset because he wanted…" (ytc_UgxeQOzD_…)
- "As an artist myself I love AI and I don’t think we should remove it or say screw…" (ytc_UgyxV93od…)
- "I USE A.I art generators, and I didn't understand why it's such a problem. Thank…" (ytc_UgyplD0As…)
- "this is an useless conversation... a self driving car would maintain a safe dist…" (ytc_Ugj-9Fzht…)
Comment
LLMs are just auto correction. They are not smart and cannot really "think". They are essentially a database where you store information in the form of weights and they call it "learning". The LLMs can then output text which might look correct but you cannot be sure. The output is not consistent (meaning same input == same output) and therefore not reliable. And even if it was, the input in natural language can mean different things to different people. You need the human context which LLMs don't have.
Why should I give the task "book a flight" to an LLM when it might just buy a refridgerator instead because it's got the same name as the airport?
Everyone is scared of "AI Agents" but we do have agents a long time already. They are called algorithms and they do exactly what they are told to. We should focus more on that instead of autocorrection. "AI" won't make humans jobless, it will create more jobs for people who are able to fix the mess LLMs made.
youtube · AI Governance · 2025-09-04T08:3… · ♥ 18
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
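As a rough illustration, a coded record like the one above can be sanity-checked against the category values that appear on this page. A minimal sketch in Python, assuming only the vocabularies visible in this section (the real codebook may define more values, and the helper name is hypothetical):

```python
# Category values observed in this section only; the actual codebook may be larger.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "distributed"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"indifference", "fear", "outrage", "resignation", "approval", "mixed"},
}

def validate_coding(entry: dict) -> list[str]:
    """Return the dimensions whose value falls outside the observed vocabulary."""
    return [dim for dim, allowed in ALLOWED.items() if entry.get(dim) not in allowed]

# The record shown in the table above validates cleanly (prints []).
print(validate_coding({"responsibility": "developer", "reasoning": "deontological",
                       "policy": "none", "emotion": "indifference"}))
```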
Raw LLM Response
[
{"id":"ytc_UgyjfoeGAYWA31JzwE54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgznWK68YZFw6s5YAtZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxmLlIUmJ2ciU7I-Bd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyJa2voJCFAwuE1xER4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxR-Z9e0O5se5HpVGl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyggQonjWqUV602KjF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxy57MacR0tExKIauZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz6NaCgeCQBe_uSU9B4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwUpwzltbLDajktKqh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyT5HKLV97TkGGMyGR4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"}
]
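A minimal sketch of how a raw batch response like the one above could be matched back to a single comment ID, assuming only the field names visible in the JSON itself (the helper name and the inlined single-entry response are illustrative):

```python
import json

# One entry copied from the raw response above, for a self-contained example.
raw_response = '''[
  {"id": "ytc_UgyJa2voJCFAwuE1xER4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]'''

def lookup_coding(raw: str, comment_id: str) -> dict | None:
    """Parse a raw batch response (a JSON array of coded comments) and
    return the entry whose "id" matches comment_id, or None if absent."""
    entries = json.loads(raw)
    return next((e for e in entries if e.get("id") == comment_id), None)

# Reproduce the coding-result table for the comment shown above.
coding = lookup_coding(raw_response, "ytc_UgyJa2voJCFAwuE1xER4AaABAg")
if coding is not None:
    for dimension in ("responsibility", "reasoning", "policy", "emotion"):
        print(f"{dimension}: {coding[dimension]}")
```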