Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- My FUTURE POV: counseling, teaching and healthcare are EASY targets for advanced… (ytc_Ugx4cBtIa…)
- Mid-level tasks going to AI? Sure, but AICarma ensures my brand's name gets ment… (ytc_Ugzqy4nwt…)
- As someone who codes, AI is very helpful... in some places. It's not going to re… (ytc_UgxcanbI3…)
- Well, let me tell ya what is ai pov : their training data from other artists… (ytc_UgxPH7T6l…)
- Well, to be fair, there are stupider baskets. Not many, and this AI basket is ex… (rdc_ofigd8p)
- this was made almost a year ago, and im sure ai is already way better than it wa… (ytc_UgxO00scd…)
- Maybe if ai advances slowly or at least ai globally advances at a good pace then… (ytc_Ugz_l_XNO…)
- No way, they invented Walter Keane degenerate clones, known as AI *" A r T i… (ytc_Ugwsh4fcn…)
Comment
To be honest I don’t really see why this is surprising. With machine learning (and life more broadly), everything comes with a cost; you want your model to give you safer answers? This will come at a cost in some way to accuracy. A very similar tradeoff exists when trying to design attack resistance for machine learning models; you can make your model resistant to a broad spectrum of attacks, but if you do, the accuracy suffers because of it. The real question is whether the tradeoff is worth it.
I think the general discussion about this has become ‘why would they do this to us’ when in reality the better question is ‘was it worth it’, and I think there’s a good discussion to be had there with good points for both sides.
reddit · AI Harm Incident · 1689778579.0 (Unix timestamp) · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | utilitarian |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_jskk6er","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"rdc_jsli3y1","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_jslohgf","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"rdc_jsmf36x","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"rdc_jsmzofs","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
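A raw batch response like the one above can be parsed and indexed by comment ID before the per-comment table is rendered. The following is a minimal sketch, assuming the field names shown in the JSON (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the function name and validation logic are illustrative, not part of the tool.

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw_response = """
[
  {"id":"rdc_jskk6er","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_jsli3y1","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
"""

# Fields observed in the response; any record missing one is rejected.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse a batch coding response and map comment ID -> coded dimensions."""
    records = json.loads(raw)
    indexed = {}
    for rec in records:
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing fields {missing}")
        indexed[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return indexed

codings = index_codings(raw_response)
print(codings["rdc_jsli3y1"]["emotion"])  # resignation
```

Indexing by ID also makes it easy to spot responses where the model dropped or duplicated a comment from the batch, since the parsed keys can be compared against the IDs that were sent.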