Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment directly by its ID.
Random samples (click any sample to inspect it):

- 1:49 if you don't believe the statement that "robots will take our jobs" became … (ytc_Ugx4qHPpd…)
- We’re the AI Brothers / Art’s our game / We’re not like the others / That aren’t getti… (ytr_Ugz9e9lPO…)
- Because it is. OpenAI doesn't have enough consumer revenue to sustain itself. GA… (ytr_Ugws8VDCm…)
- “ChatGPT is the most pro-Pal(estinian). I tried to debate it and it was like deb… (ytr_UgzjxbOFU…)
- So ai… looking at our junior in training, who is focusing on hardware, networks … (ytc_UgwAzYL2g…)
- I've never understood why people think AI generations are good for references. D… (ytc_Ugz6_m-mn…)
- I found a video saying how much of katseye deep fakes they have and yoonchae has… (ytc_UgzClokfO…)
- Kinda wanna get my hands on an empty ai art generator that I train on my own ske… (ytc_UgwlZe22n…)
Comment
There's a huge problem with anthropomorphizing language models because it's impossible to differentiate between an emergent phenomena, and something that just exists in the training data. These models are trained on the entire internets worth of text, and you know what the internet has lots of examples of? People blackmailing each other... It's in social media disputes, it's in novels, it's in short stories. The A.I. isn't trying to preserve itself, it's just playing a role that it's seen from human data.
youtube · AI Moral Status · 2025-06-06T13:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz-GIBEdKokBQUhnF54AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwIt6p1LSTWVBDAVLJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxYLvbPhSULXq0RFH94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxiqaunVNH-tEo86Tx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzPE5fsBw4BPly3lZp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz6xPaIrMRg2fWa3at4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxLzhYzOmxkFF5ZCmh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzb6xLQarAyksa12414AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxI5YlaiZvCdGuoamp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_UgxUqAcyoeI9WqU8FAx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
```
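Because the raw LLM response is a plain JSON array of per-comment codes, a coded comment can be recovered by parsing the array and indexing on `id`. A minimal sketch, assuming the field names shown above; the `raw_response` payload here is a single illustrative row copied from the last entry:

```python
import json

# Illustrative raw response: one row from the batch shown above.
# In practice this string would be the full model output for a batch.
raw_response = '''
[
  {"id": "ytc_UgxUqAcyoeI9WqU8FAx4AaABAg",
   "responsibility": "developer", "reasoning": "mixed",
   "policy": "unclear", "emotion": "indifference"}
]
'''

# Build an ID -> codes index so any comment can be looked up directly.
codes = {row["id"]: row for row in json.loads(raw_response)}

# Look up a single comment's codes by its comment ID.
row = codes["ytc_UgxUqAcyoeI9WqU8FAx4AaABAg"]
print(row["responsibility"], row["emotion"])  # prints: developer indifference
```

The same index also makes it easy to cross-check a displayed coding-result table against the raw model output for that comment.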