Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- ytc_Ugx_Pwg_O…: "AIs are not conscious. Even ChatGPT admits this. Consciousness requires agency, …"
- ytc_Ugy4wSmNQ…: "Please bring this to a city near me! I wish this would have been the case for u…"
- ytc_Ugyi03a23…: "In the animal kingdom the lions rule, AI will be the lions, and in the end, the …"
- ytc_UgxHk49YM…: "Do you guys not know the different between auto pilot and full self driving? How…"
- ytc_UgwkeTvW3…: "Well, something did change - GPT-5 is a MoE of tiny expert models, way too small…"
- ytc_UgxOJzff9…: "*she expects a lot.. it's a self driving car you eedyat! You can't create a calc…"
- ytc_Ugz9cKj-F…: "This is so stupid how a Tesla autopilot car mad one little crash but people don'…"
- ytc_UgzFJMFMy…: "If you post AI art im tracing that shit and claiming it as my own.…"
Comment
> Man this is heavy. The big question here is where do AI companies draw the line between privacy and safety, and honestly theres no easy answer.
>
> OpenAI flagged this persons account back in June for "furtherance of violent activities" but didnt report it because they said it wasnt an "imminent and credible" threat. Then months later this tragedy happens. Really makes you think about what those thresholds should actually be.
>
> This is exactly the kind of AI ethics stuff people need to understand: the real complicated questions. Like should AI companies be monitoring everything we do? Probably yes for violent stuff. But then who decides whats a threat vs just dark thoughts or venting? What about false positives that get innocent people flagged? What happens to privacy?
>
> I offer trainings for ethical ai usage and we talk about how AI isnt neutral- theres always humans making decisions about what the rules are, what gets flagged, what gets reported. This case shows how high the stakes can be when companies get those calls wrong.
>
> I dont think theres a perfect answer here but this is why AI literacy matters for everyone. These systems have massive power and we're all just figuring out the ethics as we go.
Source: reddit · Topic: AI Governance · Timestamp: 1771650420.0 · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_o6jbeg8","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"rdc_o6k732a","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"rdc_o6wn83f","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_o6ltv2a","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_o6jyjk2","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]
```
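The raw response is a JSON array with one record per comment, each carrying an `id` and the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed into an ID-keyed lookup table, using only the value sets observed in this dump (the full codebook may allow more values, and the function name and warning format are illustrative, not part of the tool):

```python
import json

# Values observed in this dump; the actual codebook may define more (assumption).
OBSERVED_VALUES = {
    "responsibility": {"company", "government", "distributed"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"liability", "ban", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "mixed"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded records) into a
    dict keyed by comment ID, warning on values outside the observed sets."""
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        cid = rec["id"]
        for dim, allowed in OBSERVED_VALUES.items():
            if rec.get(dim) not in allowed:
                print(f"warning: {cid}: unexpected {dim}={rec.get(dim)!r}")
        by_id[cid] = rec
    return by_id

# Two records taken verbatim from the response above.
raw = '''[
  {"id":"rdc_o6jbeg8","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"rdc_o6jyjk2","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]'''
coded = parse_raw_response(raw)
print(coded["rdc_o6jyjk2"]["emotion"])  # outrage
```

This is the lookup-by-ID step the page header describes: given a comment ID, the coded dimensions come straight out of the parsed record.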