Raw LLM Responses
Inspect the exact model output for any coded comment. You can look up a comment by its ID or browse the random samples below.
- "This video makes a hell of a lot of assumptions. If the majority of the people a…" (ytc_UgwtC6I00…)
- "Everyone has learned from someone else. Everyone has copied another artists styl…" (ytr_Ugy8Nr1xv…)
- "The situation isn't that, it's like looking through social media to find out a r…" (ytr_UgwVOZLje…)
- "Honestly you are too adorable of AI, it may be smarter but will it be able to cr…" (ytc_UgxS23xgf…)
- "Capitalism always wins. Artists can make Ai prompt packs and then make passive b…" (ytc_Ugw9oQXdI…)
- "It's the same with all this AI junk. They're just going to farm all the research…" (ytc_Ugw_bMg6Q…)
- "Ai is going to be a problem but those who are building the AI technology robotic…" (ytc_UgwAy_akY…)
- "NEVER, repeat NEVER, repeat *NEVER* trust the AI software to be 100% accurate no…" (ytc_UgwBWHgRR…)
Comment
Modern chat bots based on LLMs don't reason or "know" anything. They don't apply strict rules. They do probabilistic inference to guess the next token.
In many cases the result is the same, and when it's not we call that a hallucination.
We should in fact consider all LLM output as hallucinations that frequently happen to agree with reality.
If you ask an LLM to play chess it will suddenly try to make an illegal move, because it does not know the rules of the game and does not verify that the moves it makes comply with the rules.
youtube · AI Responsibility · 2025-10-09T15:1… · ♥ 41
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
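
For readers working from an export rather than this page, a coding result like the one above can be represented as a small typed record. The sketch below is a minimal illustration, not the tool's actual schema: the field names mirror the table above, and the allowed values are inferred only from labels visible on this page, so the vocabularies are almost certainly incomplete.

```python
from dataclasses import dataclass

# Allowed values inferred only from labels visible on this page;
# the real coding scheme likely has more categories.
RESPONSIBILITY = {"developer", "ai_itself", "none"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed", "unclear"}
POLICY = {"none", "liability", "unclear"}
EMOTION = {"outrage", "mixed", "indifference", "approval", "resignation", "fear"}

@dataclass
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601, e.g. "2026-04-27T06:24:59.937377"

    def is_valid(self) -> bool:
        """True if every dimension uses a value from the observed vocabulary."""
        return (
            self.responsibility in RESPONSIBILITY
            and self.reasoning in REASONING
            and self.policy in POLICY
            and self.emotion in EMOTION
        )
```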
Raw LLM Response
```json
[
{"id":"ytc_Ugw6nia84y7t65s1IK94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwetN_J-bOhbvQ4AZR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy2gU0N5Dw-auyxiqp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwYXf3CX96H63RqqLd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzuSj4A4OdQyojcwhF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxJQOxZEcS9YCSRwXF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyiP4eJ62vMkZ8jwOl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwAXm2Ng6LNYsjTgkF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxGrYIX5b02__DNTK14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx6WtzleY_3Ez_qvY54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
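
The raw response is a plain JSON array with one object per coded comment, so looking a comment up by its ID is a linear scan over that array. The sketch below assumes the raw response is available as a string; `lookup_raw_coding` is a hypothetical helper, not part of the tool.

```python
import json
from typing import Optional

def lookup_raw_coding(raw_response: str, comment_id: str) -> Optional[dict]:
    """Parse a raw LLM response (a JSON array of per-comment codings)
    and return the entry matching comment_id, or None if absent."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return entry
    return None

# Example with the first entry from the array above:
raw = ('[{"id":"ytc_Ugw6nia84y7t65s1IK94AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"none","emotion":"outrage"}]')
print(lookup_raw_coding(raw, "ytc_Ugw6nia84y7t65s1IK94AaABAg"))
```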