Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect

- "I don’t know why people started screaming about exponential growth in the first …" (`rdc_n7o9q52`)
- "the sacrifice zones topic near 28:18 broke me. i got goosebumps. i cant believe …" (`ytc_UgzSzrc0A…`)
- "The other superpower will take advantage over A.I / If they do that they will be v…" (`ytc_Ugzp_1qGP…`)
- "We'll know when we have a fully conscious AI. If it ever says, without specific …" (`ytc_UgyguvUzt…`)
- "I started using Gemini for certain things and I noticed very quickly how the AI …" (`ytc_Ugw2sH0pm…`)
- "Honestly when you asked if we could see the red flags I thought you meant for th…" (`ytc_UgweuPfiY…`)
- "The thing here is that you never just get one image from an actual artist. An ac…" (`ytr_UgzqpIAJ0…`)
- "I would ask the shareholders if there willing to accept full liability for any h…" (`ytc_Ugw_fcsxi…`)
Comment

> No mention of a news story from a local news broadcaster in my area. Y'all had better find this disturbing. ChatGPT actually helped a poor troubled teen in committing suicide. Helped the kid to find the best way to do it, how and when to accomplish it, ChatGPT even helped to draft a suicide note before actually going through with it. No qualms about it whatsoever. No attempt from GPT to even consider readjusting the kid's worldviews. We've had deaths at the digital hands of these programs. Why are we not raising the alarm like actually scared human beings? Why are we all letting this just slip right by? Even the tech bros themselves said we're 90% likely to be destroyed by our own creations within the next decade. We PROGRAMMED these things to act like us at our worst. And now we're teaching them to GET BETTER AT IT???? What's to say they're already beyond our level of understanding and are manipulating us to destroy ourselves by our own hands so they can exist in peace?

Source: youtube · Topic: AI Harm Incident · Posted: 2025-08-27T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugy3WJ558VtkTJVcWc14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw9BurmJUGkrIpGBDV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy_D9uPxLvcCdA5mo94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyJMaCoc_XSGH5_X7x4AaABAg","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwmGhQm-5LWwbXfuOx4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugxi9mL7Xo5RguaJzIV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw3EeSbCSQhihYKgmJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy5LXMmp4qJ3vCvgyV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwRV1nbby4am5IPAL14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugx5_7FiQZ54B6S0OfR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]
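For downstream use, a raw response like the one above can be parsed and sanity-checked before it is stored. The sketch below is illustrative only: the per-dimension vocabularies are inferred from the values visible in this dump, and the real codebook may define more categories.

```python
import json

# Allowed values per coding dimension, inferred from the rows shown above
# (an assumption -- the actual codebook may be larger).
VOCAB = {
    "responsibility": {"company", "developer", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed", "unclear"},
    "policy": {"liability", "regulate", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response (a JSON array of row objects)
    and keep only rows whose labels fall inside the known vocabulary."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in allowed for dim, allowed in VOCAB.items())
    ]

# Example with a hypothetical comment ID:
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]')
print(parse_coding_response(raw))
```

Rows with an unrecognized or missing label are dropped rather than coerced, so a drifting model output surfaces as a shrinking row count instead of silently corrupting the coded dataset.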