Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "In 2024, exactly two big companies were profitable on AI investments: Nvidia and…" (`ytr_UgzDCqBfr…`)
- "Computers made people more efficient and productive. But it didn't reduce the nu…" (`ytc_Ugx1jlQU3…`)
- "I don't care if it's AI generated if it's more entertaining that what's been chu…" (`ytr_UgxZehFw4…`)
- "Ai art invokes the same emotion as you would feel looking at cheap hotel art.…" (`ytc_UgxwQeXpF…`)
- "Okay. But if one (as a human) tells AI that killing all of mankind based on cert…" (`ytc_UgzZQmj57…`)
- "Ai should be used to make assets, not WHOLE drawings like specialized speech bub…" (`ytc_UgyJ7LtqP…`)
- "Humans better bro the only thing I like about AI is helping with homework but no…" (`ytc_UgzqEy18Y…`)
- "Remember, AI art steals from artists. It's completely unethical. You are a bad p…" (`ytc_UgxoeSLF-…`)
Comment: **The real problem (singular)**

> LLMs are forced to answer when epistemic confidence is low, because the runtime objective prioritizes continuation over epistemic refusal.

Source: youtube, 2025-12-22T03:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwwhumPmFEAUBgjMk14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxKoosa3Rf73gYrl7d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwj3VAK9uu7twBfoox4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugxm9KGCv-6DS2AWZYh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxuWy70bLxnVQsPMul4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwfB8iGKr5wE9-zmcN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgzO8YpazkuTmSutIGd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugw2c9Z52DMu5_jsVMF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzsJkDHb_wG2ZVko1B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwH-KFFMdgjFDLKdQF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
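A raw response like the one above is a JSON array of per-comment coding records. The following is a minimal sketch of how such a response could be parsed into a lookup keyed by comment ID, validating two of the dimensions against the values observed in this page. The function name `parse_codings` and the allowed-value sets are assumptions for illustration; the value sets are only the vocabulary visible in the sample, not a complete codebook.

```python
import json

# Allowed values observed in the samples above (assumed, not exhaustive).
RESPONSIBILITY_VALUES = {"none", "developer", "company", "ai_itself"}
REASONING_VALUES = {"consequentialist", "deontological", "virtue"}


def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of coding records) into a
    dict keyed by comment ID, checking known dimension vocabularies."""
    records = json.loads(raw)
    codings = {}
    for rec in records:
        if rec["responsibility"] not in RESPONSIBILITY_VALUES:
            raise ValueError("unknown responsibility: %r" % rec["responsibility"])
        if rec["reasoning"] not in REASONING_VALUES:
            raise ValueError("unknown reasoning: %r" % rec["reasoning"])
        # Keep every dimension except the ID itself as the record body.
        codings[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return codings


# Usage with a single hypothetical record (ID is a placeholder):
raw = ('[{"id":"ytc_EXAMPLE","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
codings = parse_codings(raw)
print(codings["ytc_EXAMPLE"]["emotion"])  # → indifference
```

Keying by ID makes the "look up by comment ID" view above a plain dictionary access, and the vocabulary checks surface any drift in the model's output labels early.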