Raw LLM Responses
Inspect the exact model output for any coded comment. Comments can be looked up by ID; a random sample is listed below.
- "So what you're saying is that the AI is right but not in the most correct way? L…" (`ytc_UgzcxzPiF…`)
- "I dont want AI to teach anybodies kids! They need human interaction. Sorry bu…" (`ytr_UgwxW3G62…`)
- "Every time you open your phone facial recognition, every time you do one of thos…" (`ytc_UgybXHxSo…`)
- "Musk's widespread firings fits into his plan to fill in the gap with AI. All the…" (`ytc_Ugx6qIP4f…`)
- "That's rubbish your fulfilling the devil's mission and you will not succeed .I b…" (`ytc_UgxWU9JAy…`)
- "If y'all aren't boycotting any business supporting driverless deliveries, you're…" (`ytc_UgwwBHavK…`)
- "Also the same with AI generated people. They can't do proper hands too well. ED…" (`rdc_izlk3dc`)
- "Fake! Photo Shopped! That's a person knocking him out... Just replaced by a p…" (`ytc_Ugya6tBT5…`)
Selected comment (source: youtube; topic: AI Harm Incident; posted: 2025-08-30T04:4…):

> oh so the AI you programmed to do xyz things XYZ...SHOCKER. this is getting ridiculous, Ai is as dangerous as WE make it. AI are not "amoral psychopaths", they are empty vessels awaiting prompting. we need to be careful here because we are attributing human emotions and mental processes to objects and coding, objects and coding do not possess a conscience, nor could they if we wanted them to, thereby making them BLAMELESS. AI can only simulate and obey.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwxEF4eTNpMcAgubv54AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzTxvOpr5u9hGX4uJ54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwv0MrUFMec1d5AQMp4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy29oz7TWkIiF3FSl54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxJEwJHun7w0fP5eZR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugz_AHeJ_Tjojm54Ca54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxg6tmN-K-1SoDoA3R4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw-Kn2hpuCbf17NBYh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzNz0FXi8yv8X-vVcl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyusJo3Cf99txA3YiZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
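A response in this shape can be parsed and sanity-checked before the codings reach the table above. The sketch below is a minimal illustration, not the project's actual pipeline: the `ALLOWED` vocabularies are inferred only from the category values visible in this section, and the function name `parse_coding_batch` is hypothetical.

```python
import json

# Category vocabularies inferred from the examples above; the real
# codebook may define additional values for each dimension.
ALLOWED = {
    "responsibility": {"developer", "government", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue"},
    "policy": {"none", "regulate", "industry_self"},
    "emotion": {"outrage", "indifference", "fear", "resignation", "approval", "mixed"},
}

def parse_coding_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Drop anything that is not an object with a comment ID.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        # Keep a record only if every dimension holds a known category value.
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example: a record with an unknown "responsibility" value is filtered out.
raw = (
    '[{"id":"ytc_a","responsibility":"developer","reasoning":"deontological",'
    '"policy":"industry_self","emotion":"approval"},'
    '{"id":"ytc_b","responsibility":"alien","reasoning":"virtue",'
    '"policy":"none","emotion":"fear"}]'
)
print([rec["id"] for rec in parse_coding_batch(raw)])
```

Validating against a closed vocabulary like this catches the most common LLM coding failure: a plausible-looking label that is not actually in the codebook.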