Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by comment ID.
Random samples:

- “A.I. can’t be bargained with, A.I. can’t be reasoned with, A.I. doesn’t feel pi…” (ytc_UgzZ3FyDP…)
- “Everyday humans can be so dumb. I swear we are trying to eliminate ourselves. Th…” (ytc_UgwueIkwO…)
- “Did you hear how this narrator pronounced "asymmetry?" This was an AI voiceover.…” (ytc_UgzEatFc1…)
- “Looks like the AI suddenly realized there's a better route and seized the opport…” (ytc_UgzuPSFkh…)
- “I feel like NaNoWriMo just got too big; as a bunch of random people basically da…” (ytc_Ugxnxn9Fh…)
- “@Astro4Truth Yes the people too, but they are super low priority because plebes …” (ytr_UgzdRxSyX…)
- “Code autocomplete isn’t that great but AI Agents are *way* better. They can catc…” (ytc_UgxaEPrTx…)
- “ai will reduce us to very small number to keep us as pets. we are spread all ove…” (ytc_UgzKg7Llv…)
Comment
Everyone told me „AI is so useful to summarize scientific papers” so I tried that in ChatGpt to help me write my degree. I’ve already red that paper multiple times, just wanted to have some bullet points quick, because you know, it was a big paper and reading through it every time I need one sentence is a chore. Anyway, chatGpt didn’t do shit. It was adamant that the paper was about human cells when it was entirely mice cells, and it told me it’s not on my topic, when it was entirely about it, and then hallucinated a bunch of points that wasn’t in the paper when I tried to correct it. Please, never use it in that way, it will entirely miss the point if it isn’t spelled out in the paper in the big bold letters for babies. Everytime you need to deduce something based on it’s findings it will be wrong
youtube
AI Responsibility
2025-10-11T10:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgytlyphVodYISkI0ut4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz_MrrqJ1PTQDvge954AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwZBuey2PGxGjzFxF14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzNIu8Sd_6VQdPXMiV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxASKm6cUcl8yFSruF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw2xWuReIXqbpg18bJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzb_Lk9mr3cWve8g9R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzq5bvEp2jKlH9rMcV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxLXpQMeEU0htZdEHR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxfy9FdAf4CFtxZ6K54AaABAg","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"mixed"}
]
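Downstream code has to turn a raw response like the one above into usable rows. A minimal sketch of that step, assuming the allowed values per dimension are exactly those seen in this dump (the real coding scheme may define more), and using a hypothetical helper name `validate_coded_batch`:

```python
import json

# Allowed values per coding dimension — inferred from the rows in this
# dump, not an authoritative list of the scheme's categories.
ALLOWED = {
    "responsibility": {"company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "outrage", "resignation", "mixed",
                "indifference", "approval"},
}

def validate_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response array and keep only well-formed rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # IDs in this dump use ytc_ (comments) and ytr_ (replies) prefixes.
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        # Drop rows where any dimension is missing or outside the scheme.
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_Ugw2xWuReIXqbpg18bJ4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"none","emotion":"mixed"}]')
print(len(validate_coded_batch(raw)))  # prints 1
```

Invalid rows are silently dropped here; a real pipeline would more likely log them and re-prompt the model for the affected comment IDs.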