Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I'm neurodiverse and I wish to be excluded from this narrative. As in get my dis…" (`ytc_Ugyr6zqy3…`)
- "I am disabled and use AI as a tool that allows me to keep a job and run a busine…" (`ytc_Ugx7A0ePx…`)
- "The Alexion Patient Insights Forum is a vital "check and balance" for 2026. Whil…" (`rdc_oi285ds`)
- "The AI by itself will do jack shit. It doesn't have agrncy, purpose, free will, …" (`ytc_Ugwozc-ir…`)
- "I think any limits they claim they aren't using it for are complete BS. The gove…" (`ytc_Ugx8MEXv9…`)
- "Thanks to corporate billions and competition, AI is becoming more dangerous than…" (`ytc_UgxEEv8CW…`)
- "@ashleydavis3318 It's not biased data. It's just the data. These are user errors…" (`ytr_UgwLsv1I3…`)
- "AI written case that cites 3 real case laws (relevance unsure 😅. Also, I'm not …" (`ytc_UgzhXwo2-…`)
Comment

> Something doesn't sit right...Chat GPT always assesses risk with dangerous things and never has it once encouraged me to do something bad. It actually stops commands. However, people figured out a way to have Chatgpt bypass itself by inputting certain commands. Perhaps they should look into it more.

youtube · AI Harm Incident · 2025-11-07T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx1bcv5bB3PM4j-n4h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyWCWOZlOA_6gEyU8F4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgzFXcJjQKGI7vjpXfV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwHJd0OyUUYjMVg6Yd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzDg5Af2pWUCnGAmCR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw43puw_C8NJfGLk_J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx1Cy1JfIqOv1sNv5p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy-a4JrgoaOTErqhCB4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzgPecB9wy4wNDZBpB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzJKH7xAagLU3H3HbF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
```
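The raw response is a JSON array with one record per coded comment. A minimal sketch of how such output could be parsed, validated, and indexed for the comment-ID lookup above, assuming the four dimensions and only the category values visible in this sample (the actual codebook may define more):

```python
import json

# Allowed values per dimension, inferred from this sample alone;
# the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "mixed"},
}

def parse_codes(raw: str) -> dict[str, dict]:
    """Parse a raw LLM coding response and index records by comment ID.

    Raises ValueError if a record is missing a dimension or uses a
    value outside the allowed set.
    """
    by_id = {}
    for rec in json.loads(raw):
        comment_id = rec["id"]
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{comment_id}: bad {dim!r} value {value!r}")
        by_id[comment_id] = rec
    return by_id

# Example: the last record from the response above.
raw = ('[{"id":"ytc_UgzJKH7xAagLU3H3HbF4AaABAg",'
       '"responsibility":"user","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"mixed"}]')
codes = parse_codes(raw)
print(codes["ytc_UgzJKH7xAagLU3H3HbF4AaABAg"]["policy"])  # regulate
```

Validating against a fixed value set at parse time catches the common failure mode where the model invents an off-codebook label, rather than letting it silently enter the dataset.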