Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- ytc_Ugzjikxbl… — "Fake. Stop fear mongering. The main comment on top says something about giving a…"
- ytc_Ugxjz5qwn… — "So the ML engineers of google who wrote a petition to close or monitor the progr…"
- ytc_UgyPQqvg6… — "The tag "Artificial" is deceptive in that it doesn't take account of the fact th…"
- ytc_Ugzws7D1N… — "I mean ai art stans are mostly just tech bros so its no surprise they have not e…"
- ytc_UgzR2qTgA… — "People are crazy if they believe what they are told. The police and government h…"
- ytr_Ughe_3dYQ… — "P.s. you have to much faith in automation. Robots require programmers, electrici…"
- ytc_UgzT-07ZR… — "A talent i was born with is an eidetic memory, you know what i did with that? No…"
- ytc_UgzrBq72e… — "Why would this stupid AI give out information on how to commit suicide? Why eve…"
Comment
@AggroKnight42 Exactly. This kind of content usually just exposes the creator's shallow thinking and lack of technical understanding. Like, this guy correctly notes that LLMs are trained on a substantial portion of all English writing that has been digitized. He then makes the leap that this somehow produces something “alien” in its contextual “understanding” of the world. How could that be possible if the entirety of its context is literally just human writing? He then also fails to notice that the reason LLM outputs under “stress” mirror what is really just bad human behavior is that their training data contains a massive bias towards bad human behavior, since well-adjusted behavior doesn't end up in the news or get blabbed about on social media. Like, if you have even a passing understanding of how machine learning works, the reasons for this “behavior” are super obvious.
Platform: youtube
Video: AI Moral Status
Posted: 2025-12-14T18:5…
Likes: 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytr_UgzUE15kmCd3np_LVOt4AaABAg.AQfcIxUd4YJAQfdfzr5XeB","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytr_Ugw2NnTOjqSNBT76Hz94AaABAg.AQeSm6vXoYnAQhp0SOah0v","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugw0DwriVtjlXZQZBFV4AaABAg.AQeQ5DlSM-xAQhBPvWIh4c","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytr_Ugy0FmgNcT00xBCqwut4AaABAg.AQePQpt8hoyAQeQ8-kcyHP","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytr_Ugy2Eb4GaM7yCDVlr3B4AaABAg.AQeAVIQLaoXAQiS1lsxIz8","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
{"id":"ytr_UgwXWyr-fwsu1mt-ID94AaABAg.AQe4CHREpk1AQiQWiNFgtL","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
{"id":"ytr_UgyC6ZY5A-y4cNZLuEZ4AaABAg.AQdYqxU8afVAQinl3qrUm6","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyC6ZY5A-y4cNZLuEZ4AaABAg.AQdYqxU8afVAQio_0S2sI1","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
{"id":"ytr_Ugz5Q1FOsi2Hd4dhNoF4AaABAg.AQdGTqun_GrAQhMtSWQAYN","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytr_UgwbZJbfhCUnyBlyIpV4AaABAg.AQcLwajNpOzAQcNTkoP-6q","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"}
]
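The raw response is a JSON array of per-comment code assignments, so looking up one comment's codes by ID is a simple parse-and-index step. A minimal sketch (the function and variable names are illustrative, not part of this tool; the sample record is copied verbatim from the response above, and a real response would contain the full batch):

```python
import json

# One record copied from the raw LLM response above.
raw_response = """[
  {"id": "ytr_Ugy2Eb4GaM7yCDVlr3B4AaABAg.AQeAVIQLaoXAQiS1lsxIz8",
   "responsibility": "company", "reasoning": "deontological",
   "policy": "industry_self", "emotion": "outrage"}
]"""

# The four coding dimensions shown in the Coding Result table, plus the ID.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw: str) -> dict:
    """Parse a raw LLM response and index its records by comment ID,
    skipping any record that is missing an expected field."""
    records = json.loads(raw)
    return {r["id"]: r for r in records if EXPECTED_KEYS <= r.keys()}

codes = index_by_id(raw_response)
rec = codes["ytr_Ugy2Eb4GaM7yCDVlr3B4AaABAg.AQeAVIQLaoXAQiS1lsxIz8"]
print(rec["emotion"])  # outrage
```

Note that this record's values (company / deontological / industry_self / outrage) are exactly the ones rendered in the Coding Result table, which is the correspondence the lookup relies on.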