Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect

- `ytc_UgwLWU-jk…`: Look at her right eye vs her left one. The irises are slightly different color, …
- `ytc_UgzIYAxbH…`: People are always going to want things better, faster, and cheaper. Nothing is …
- `ytc_UgzYn-TFd…`: If they paid us (AI trainers) a whole lot more, say, an actual fucking living wa…
- `ytc_UgxcfxCtQ…`: AI is dangerous because it is part of the end time, AI is not mankind friends. T…
- `ytc_Ugxg75PXS…`: *Firstly, I don't think anyone has ever said:* "Oh my I feel terrible for that …
- `ytc_Ugwgo8Q2R…`: Funny how AI tools are everywhere now. Olovka's helped me keep my essays legit w…
- `ytr_UgwUCvaEv…`: @thesaint635 what part of that is whining, and conversely, why are you whining? …
- `ytc_Ugi1caexr…`: Self driving cars are made to prevent traffic acidents, they see 1000x more than…
Comment

> So since the tech people invented AI, and now they're going to be replaced by AI, can we beat to death the tech people for causing us to lose our jobs? It was supposed to be AI go to work, people rest and create art. Not people to be replaced by AI or die.

youtube · AI Jobs · 2025-09-14T18:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz-vFky7cd4K_xfkp94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxqi64HB36gsKkwPQF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxwcULIiVI28xJ1qvx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw5a9deCTB1ZEBC2wt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy5_Zi5VOSJhHwexSZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyTaIgpXH5-9StlliJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyMY5NTARwlVnOb-x14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwQU5WAcHWVDCT-Ww54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwlkkDV8n04j7eDAoh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwxzwEfx90iAo261TN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
```
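The raw response is a JSON array of per-comment codes, one object per comment with an `id` plus four coding dimensions. A minimal sketch of how such a payload might be parsed and validated before loading it into the database — note the allowed category values below are inferred only from the records visible on this page, not from a documented codebook, so the real schema may include more categories:

```python
import json

# Allowed values per dimension (assumption: inferred from the visible
# records above; the full codebook may define additional categories).
SCHEMA = {
    "responsibility": {"none", "ai_itself", "developer", "company"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"unclear", "ban", "liability", "regulate", "industry_self"},
    "emotion": {"indifference", "fear", "outrage", "approval"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip malformed entries
        # every dimension must be present and hold an allowed value
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

# Example using one record from the response shown above.
raw = ('[{"id":"ytc_UgwQU5WAcHWVDCT-Ww54AaABAg",'
       '"responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"liability","emotion":"outrage"}]')
codes = parse_codes(raw)
print(len(codes))  # 1
```

Validating against a closed vocabulary like this catches the common failure modes of LLM coders (invented labels, missing keys) before they contaminate the coded dataset.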