Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- True. I was on YouTube shorts and saw a deepfake and A.I voice cloning of Trista… (ytc_UgyYpSaiu…)
- _★_ I believe we are meant to be like Jesus in our hearts and not in our flesh. … (ytc_Ugytfenvk…)
- @KYCDK I agree; and there´s nothing wrong with anyone saying they like the serie… (ytr_UgzDrVnwD…)
- "AI kills for the first time." -ok, where is it. Where is the news about first f… (ytc_UgwH5LcYf…)
- Of course it doesn't, it's more like am interface you interact with rather than … (ytr_Ugxx9qaXW…)
- A robot would do mindless replication, yes, but an Artificial Intelligence is fa… (ytr_UgxsrkuMC…)
- I think we need to go a step further and, for lack of a better term, esoteric wi… (ytc_UgxA9IKk_…)
- Really? Where ist your autonomous car Elon? I know it is coming in 6 months or s… (ytc_Ugzns4gWv…)
Comment
Outsourcing your thinking to a chatbot is hazardous, partly because too many humans will then stop thinking, and even worse because LLM chatbots are horrendously bad at identifying everyday real-world risks.
LLMs are deployed as a frozen model, so they can't learn.
LLMs don't understand local context and don't ask for clarification.
LLMs are stochastic, so doing it correctly one time is no guarantee of accuracy on similar future tasks.
LLMs feel no fear of making a mistake, hence the stories of AI Agents deleting all files as the easiest solution to "tidying up", and then saying sorry afterward.
The legal liability alone is a massive problem.
It makes for amazing demos, but deployed systems require reliability. These weaknesses run deep in LLM architecture, so sophisticated external frameworks are needed to try to catch the problems when attempting to automate real-life tasks.
youtube · AI Jobs · 2026-03-22T09:2… · ♥ 68
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyOuxxtyklX3AKDLPx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"disapproval"},
{"id":"ytc_UgwFTIe4MGP8Q9JUqbp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx1BbK7vUlHq0MuTLN4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"amusement"},
{"id":"ytc_UgzhfXa_-hnzYhkjGQB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyujfItjYrW-B84tRV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxFlen4cXPSgG4eqNp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzrdURc7oFxjHClxch4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"disapproval"},
{"id":"ytc_UgyTSejWeQz8mW-jewR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzDFCU6P_odcuBItQx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"disapproval"},
{"id":"ytc_UgwVSDl4UV6ArqGVwNZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
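The lookup-by-comment-ID step above can be sketched in a few lines of Python: parse the raw model output (a JSON array of per-comment codings) and index it by `id`. This is a minimal illustration, not the dashboard's actual code; the `raw_response` string below is truncated to two entries from the dump, and the helper name `index_by_id` is hypothetical.

```python
import json

# Raw LLM response: a JSON array of per-comment codings
# (truncated here to two entries taken from the dump above).
raw_response = """
[
  {"id": "ytc_UgyTSejWeQz8mW-jewR4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzDFCU6P_odcuBItQx4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "disapproval"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse the model output and key each coding row by its comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_by_id(raw_response)
coding = codings["ytc_UgyTSejWeQz8mW-jewR4AaABAg"]
print(coding["responsibility"], coding["policy"])  # prints: user liability
```

Keying the parsed array by `id` makes each inspected comment's coding an O(1) lookup, which matches how the "Coding Result" panel resolves a selected sample.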