Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- “It’s insane because the world has more corruption going on and if this is all th…” (ytc_UgyaFOOux…)
- “What about Ai jobloss? Or ai as weapons? Or Ai job agents everywhere? Why can’t …” (ytc_Ugz5cGMHL…)
- “AGI is atleast 20 years away...... if at all possible. I have been a believer th…” (ytc_Ugwsr_jFy…)
- “A.i robots and drones make better soldiers,they never miss a target and they hav…” (ytr_UgxLEdK04…)
- “That's all large language models do. They weigh up the inputs and predict the ou…” (ytr_Ugw0HHoay…)
- “I don’t believe it. I’ve used ChatGPT and it’s always very PC and says to ask f…” (ytc_UgwsMqmSP…)
- “This is so scary and no one realizes it. We are trying to teach machines right …” (rdc_e7jrf20)
- “AI will be the end of us all, yeah you too person developing it. Once out of the…” (ytc_UgyYjrz6Y…)
Comment

> Myself, and multiple women I'm in a support group with, are experiencing our abusers using AI to help them leverage legal letters or legal threats against us. Many of these letters are still outrageous in their demands and barely make any legal sense. But they clog up time with our lawyers, and cause severe stress when making bold claims or legal threats with seemingly legal precedence.
>
> Only to find out that there is no precedent or it was a hallucinated citation. It costs us stress, time, and legal funds. All while emboldening abusers.

Source: youtube | Topic: AI Harm Incident | Posted: 2025-11-07T21:1… | ♥ 924
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwccL-tEf1teXcEePZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzw7TtO_yb3Naij-o54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyanQ_xog7LXQPjmAx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyKb3L8zLZKrL6noe94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugzzl_KxEudZxMxGvKx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugyk_0cnXF8VxWHYhJN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzsHbHQEflgaf_3n214AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzkMVc5BwMMVMOI7YV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_Ugy9cGqN9etDtKbQw2B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgydjVvWvFWStr9z0BV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"mixed"}
]
```
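Each raw response is a JSON array of per-comment codes, one object per comment ID. A minimal validation sketch in Python, with dimension vocabularies inferred from the codes visible on this page (not an official codebook; adjust the sets to the actual coding scheme):

```python
import json

# Allowed values per dimension, inferred from the sample codes above.
# These sets are an assumption, not the authoritative codebook.
ALLOWED = {
    "responsibility": {"user", "company", "developer", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and reject any record with a missing id
    or an out-of-vocabulary value, so a malformed batch fails loudly."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {value!r}")
    return records

# Hypothetical single-record batch for illustration.
raw = '[{"id":"ytc_x","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"}]'
print(len(validate_batch(raw)))  # prints 1
```

Failing loudly on out-of-vocabulary values catches the common LLM failure mode where the model invents a label outside the scheme, which would otherwise silently pollute downstream tallies.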