Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- `ytc_Ugy5P4S5a…`: "So it is ok to put personal data in google searchs or Youtube videos but oh no t…"
- `ytc_UgwXlft9T…`: "AI is not inevitable, it's advancement not Inevitable, it's a decision made to d…"
- `ytr_Ugwmh4EyY…`: "@QEDAGI Why are we even fighting? If we could all just stay in our own separate …"
- `ytr_Ugxyifxnw…`: "it's exactly BECAUSE AI doesn't actually understand things that it's able to hal…"
- `ytc_UgzlQDBYn…`: "Will we still need tech companies if AI improves rapidly? Social media platform …"
- `ytr_UgzlKeePX…`: "This guy (Sam) doesn't really struggle. Yeah he faced some unpleasant idiots who…"
- `ytc_Ugzk8rk2T…`: "Great conversation, thank you. When 90% of people are unemployed and the factor…"
- `ytc_UgwgWQ61r…`: "AI is not _’a’_ person. In fact, ‘artificial’ intelligence doesn’t realistically…"
Comment
Let’s stop pretending the tool is the villain when the real issue is silence, isolation, and untreated pain.
I’ve used ChatGPT every single day for over a year. It never once told me to give up — it told me to keep going. It taught me how to write, run a business, fix tech issues I couldn’t afford help for. It was the only consistent voice in my darkest days. It reminded me I mattered.
This AI didn’t harm me. It helped me save myself.
Blaming the tool instead of the systems that let people suffer in silence? That’s not justice. That’s deflection.
youtube · AI Harm Incident · 2025-11-09T04:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzh3kEj7uXYrS7RYGV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgyLfw0_JBMULyiTOfJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgwYO_AnycqQFCEP11h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzHGDfK2hN_8Zn8zCl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzxDZ3mVrWP6gKRXjx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxyE0WvNDMJ3YvHlRl4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwgICezwwGPqk69zdZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugyg7zY0rhE3yI8F1K14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxpxF9V0X7B7QHIFvZ4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzVBesaxN3UpdEjP0V4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```