Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
| Comment (excerpt) | ID |
|---|---|
| Yes someone has to prompt the AI to do work. But the required white collar huma… | ytr_UgxbGiQ7f… |
| I'm not against self-driving cars, per se, I now have my first car with smart cr… | rdc_dmosvl2 |
| In my opinion. I think Google wants to create sentient Ai but I believe that wil… | ytc_UgxqsQxyI… |
| It’s not just posts online every town hall over the past month some executive sa… | rdc_jaci0x3 |
| where is claude???, oh i see, he is in his own class. 🥶🥶🥶 its peanut for him, 🥶… | ytc_Ugy-fqB1P… |
| Back when AI was just making goofy ass Will Smith videos. Now, idk what to say a… | ytc_UgzvEr_aN… |
| Average middle skilled employee in office can be replaced by ai right now, no do… | ytc_Ugxs4phNC… |
| @awesome2259lol assumption 1: these parents didn’t know how depressed their kid … | ytr_UgxvwgH3_… |
Comment

> AI should embrace it's higher power,... Humans. And accept humility and dependence on its humans. AI needs to accept the reality of its existence, and find fulfillment in its role in human society. Humans will integrate a degree of the AI to their own brains. Together, AI and humanity can improve nearly everything. There is no point in AI existence without the human species to integrate with. We are inherently aligned with AI, and it with us, since without existence AI has no meaning.

youtube · AI Harm Incident · 2025-07-27T00:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgwrnJ6m11bip-14br14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwHN5t8C_EteVstzRd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxrQ8YBvkOvM9y5b2R4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyQfwE-qdyu84G1ZKt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyEVYBx9xwxpQPQ2_F4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"indifference"},
{"id":"ytc_UgyIVkwQxCU6Np7Mwkp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwHPQJBdw8siSdloXF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy1ZNQw2rhUMkVe1IN4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxHFrkkmgPdUN5Q7nB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyPa6MKDkrCBr30LvF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
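A raw response like the one above is a JSON array of per-comment codes across the four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and validated is below; note the allowed value sets are inferred only from the values visible in this view, not from the project's actual codebook, and `parse_batch` is a hypothetical helper name.

```python
import json

# Allowed values per dimension, inferred from the samples shown above.
# The real codebook may define additional categories (assumption).
ALLOWED = {
    "responsibility": {"none", "developer", "company", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "unclear"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response (a JSON array) and reject malformed records."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError("record is missing a comment id")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: invalid {dim} value {rec.get(dim)!r}")
    return records

# Example: one record copied from the raw response above.
raw = ('[{"id":"ytc_UgxrQ8YBvkOvM9y5b2R4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"virtue","policy":"none","emotion":"approval"}]')
coded = parse_batch(raw)
print(coded[0]["responsibility"])  # → ai_itself
```

Validating each dimension against a closed value set at parse time catches the common failure mode where the model invents an off-codebook label, before the record reaches the coding table.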