Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
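The lookup described above can be sketched minimally: index the coded records by their `"id"` field and fetch one record directly. This is an illustrative sketch, not the tool's actual implementation; the sample record below is taken from the coded output shown later on this page.

```python
import json

# Hypothetical sketch of "look up by comment ID": build an index over the
# coded records keyed by their "id" field. The record here is copied from
# the raw LLM response shown below on this page.
raw = """[
  {"id": "ytc_UgzFb0SHkBh5HLNUrzF4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "industry_self",
   "emotion": "indifference"}
]"""

by_id = {rec["id"]: rec for rec in json.loads(raw)}
print(by_id["ytc_UgzFb0SHkBh5HLNUrzF4AaABAg"]["emotion"])  # → indifference
```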
Random samples
- Good news! Openai came out and called the use of these tools "abuse". So yeah, i… (ytc_Ugy8OkWTT…)
- Far from having machine panic. AI is critical in my career, and it’s the sole th… (rdc_mlh6esm)
- This is because humans are horrible to each other. Ai is not going rewind femini… (ytc_UgzDyvHeF…)
- Good. Having an AI that learns without monitoring is just surreally ridiculous, … (rdc_dwvr7mi)
- What makes you think we are a threat to an AI's theoretical sense of self preser… (ytr_UgwKq8Adc…)
- Why they keep calling it AI, artificial intelligence? Did you try ask something … (ytc_UgwGV6LJK…)
- Ai is slop, no such thing as ai art or ai artists. Just generated slop, and slop… (ytc_UgyIzPV_8…)
- [translated from Arabic] O people, a parable is set forth, so listen to it. Indeed, those whom you invoke… (ytc_Ugzw86FOj…)
Comment
It depends what kind of model they are using “evolution” models will do this! 100% but not all models. luckily evolution models are not the best models to improve AI for this and many other reasons. Companies don’t like these models because they spend too much time and energy cheating rather than actually getting smarter.
Source: youtube · Topic: AI Harm Incident · Posted: 2025-09-23T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzGNARMuqDogRF6fUh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyBelMTfB6p9ug2Bx14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzFb0SHkBh5HLNUrzF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugxumbh0zt-rHabnmzB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyUK8ayVsVwZSQPhXh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyvN8gS0KC81gULdSl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugzcq5LzC9QmfYm0Usl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzQOS0qOQ_qZTc_art4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxxVwOc5K0Y6OrK1a94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy8-4MtYJgFuMY9h994AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
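A raw response like the one above can be parsed and sanity-checked before the codes are accepted. The sketch below assumes a codebook inferred from the values visible on this page; the real codebook may define additional codes.

```python
import json

# Allowed codes per dimension, inferred from the coded output shown on this
# page (an assumption: the actual codebook may include more values).
CODEBOOK = {
    "responsibility": {"company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "resignation", "approval", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject any record with an unknown code."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim} value {rec.get(dim)!r}")
    return records

sample = '[{"id":"ytc_x","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"fear"}]'
print(len(validate_batch(sample)))  # → 1
```

Rejecting unknown codes at parse time catches the most common LLM coding failure (an invented category) before it silently enters the coded dataset.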