Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- A robot may not injure a human being or, through inaction, allow a human being t… (ytc_UgxUPBuJI…)
- They’re going to “save us” from what they created - with UBI/AI/digital ID/pod h… (ytc_UgzkIJpW4…)
- “Face recognition technology is more likely to miss identify people of colour” … (ytc_UgxyxHfds…)
- In order to understand people, you need to understand neuroscience and psycholog… (ytr_UgyoZ2kwm…)
- It still looks like we are behaving like a monkey with a hand-grenade! AI is bei… (ytc_Ugx3HQwFp…)
- Sadly there is a lot of AI art in Patreon, pixiv, and making hundred of dollars … (ytc_Ugz0Dza1W…)
- I really hate the overly vague way you present this topic. For example, with the… (ytc_Ugzhd-rby…)
- I heard of the horrors from within microsoft, apparently its not only just AI ge… (ytc_Ugy1pSwCI…)
Comment
The problem with LLM retail-facing chatbot AI right now is that, yes, it can anticipate and ask what you want next/suggest the next step in a chain... but ALL OF THE OUTPUT is usually 100% wrong/innacurate/lies that it tells you with conviction and "as truth". That's dangerous. And the LLM models are the only segment of AI that THE MAJORITY of the population has exposure to at this time. Not good for convincing us that tax breaks and bailouts for the overall AI business is a wise use of our tax dollars.
youtube · AI Responsibility · 2026-01-09T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
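The "Coding Result" table above is a per-dimension rendering of a single coded record. A minimal sketch of producing that view (the field names and the `render_coding_table` helper are assumptions drawn from the displayed table, not the tool's actual code):

```python
def render_coding_table(record: dict, coded_at: str) -> str:
    """Render one coded record as a two-column markdown table,
    mirroring the 'Coding Result' view. Field names are assumed
    from the table shown above."""
    rows = [
        ("Responsibility", record["responsibility"]),
        ("Reasoning", record["reasoning"]),
        ("Policy", record["policy"]),
        ("Emotion", record["emotion"]),
        ("Coded at", coded_at),
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {k} | {v} |" for k, v in rows]
    return "\n".join(lines)

record = {"responsibility": "company", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "fear"}
print(render_coding_table(record, "2026-04-27T06:26:44.938723"))
```

The timestamp is passed in separately because, in the raw response below, the model emits only the four coding dimensions per comment; "Coded at" is added by the pipeline.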
Raw LLM Response
```json
[
{"id":"ytc_UgwwZCujli0B4x5fCu94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyEyJWCSHmwD-Kru7x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwq4PY5ONLga4H7Xx54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwO8h5bOu6ffXdPUJR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxI9Ykwky3K6ime3414AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwdAZPfj9L0KPk627N4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyT6j-L34xgw8eAz254AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzNC4n1RVH0WgiLM5l4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwtJeSrM9jLu_jRnG94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzk49Fppyiw3f2dIO54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
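The raw response is a JSON array with one record per comment, keyed by comment ID, which is what makes the "look up by comment ID" feature possible. A minimal sketch of parsing such a response and indexing it by ID (the `index_by_id` helper and the two-record sample are illustrative, not the tool's actual code; malformed records are skipped rather than crashing the lookup):

```python
import json

# Illustrative two-record excerpt in the same shape as the raw response above.
raw_response = """
[
 {"id":"ytc_UgxI9Ykwky3K6ime3414AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgwdAZPfj9L0KPk627N4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
"""

# The four coding dimensions plus the comment ID, per the response format.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw: str) -> dict:
    """Parse the JSON array and map comment ID -> coded record,
    keeping only records that carry all expected keys."""
    records = json.loads(raw)
    return {
        r["id"]: r
        for r in records
        if isinstance(r, dict) and EXPECTED_KEYS <= r.keys()
    }

coded = index_by_id(raw_response)
print(coded["ytc_UgxI9Ykwky3K6ime3414AaABAg"]["policy"])  # regulate
```

Indexing once and looking up by ID keeps inspection O(1) per comment, rather than rescanning the array for every lookup.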