Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- i use ai generated images for inspiration nd sometimes some silly pictures. as a… (ytc_UgxaBRS_K…)
- @dachshunddoggo2764 Musk & Bezos have supplied hundreds of thousands of jobs aro… (ytr_UgyVdUNOR…)
- Are you sure about that? If you are referring to LLM models, then they aren't ev… (ytc_UgyZIgOMz…)
- Company A led by AI 1 sends money to Company B led by AI 2 who then sends money … (ytc_UgyD8k5cQ…)
- they are doing AI like this because they just cant not do it anymore, it's liter… (ytc_Ugwu93GPJ…)
- 0:51 "I sucked at it." So did I. But I got better. How? You guessed it! By prac… (ytc_UgwXnUy9B…)
- @Tomi-always-Tomi That's my point. Who is going to be able to afford a fully aut… (ytr_UgwH2Ucbh…)
- @onlyguitar1001 Are you saying that AI and robotics will change human nature? … (ytr_UgzPyroQN…)
Comment
Nope. Regular ChatGPT does this. There was an article earlier this year with researchers warning that it overlooks common warning signs and could even encourage suicidal behavior: https://www.sfgate.com/tech/article/stanford-researchers-chatgpt-bad-therapist-20383990.php
Source: reddit · AI Harm Incident · 1756221953.0 · ♥ 48
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
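A coded record like the one above can be sanity-checked programmatically. The sketch below is a minimal validator assuming each record is a dict with the four dimensions shown; the allowed value sets are only those observed in this section, and the real codebooks may contain more labels.

```python
# Values observed in this section per dimension; an assumption, since the
# full codebooks are not shown here and may be larger.
CODEBOOK = {
    "responsibility": {"developer", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "resignation"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for dim, allowed in CODEBOOK.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unknown {dim} value: {value!r}")
    return problems

# The coded record from the table above.
record = {"responsibility": "developer", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "fear"}
print(validate(record))  # []
```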
Raw LLM Response
```json
[
  {"id":"rdc_nas8uw5","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"rdc_nas2pmo","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_natz30g","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"rdc_natwvdy","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"rdc_narwpwb","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```