Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "I'd have asked how it knows that it doesn't have feelings, emotions. I once got …" — ytc_UgwK8KiD2…
- "Its wild to me people fail to see the fact AI can and will become more intellige…" — ytr_UgzIaVQ6T…
- ""So what makes it special?" The intent. A prompt is an intent. Art is MADE of in…" — ytr_UgzRFpoLQ…
- "If basic chatbot apps like Replika can hold a better conversation than a human, …" — ytc_UgxvNjhq4…
- "PhD student AI researcher here. I think your point at the end about "perceived i…" — ytc_Ugy_B3n7w…
- "This scaremongering is complete and utter crap. AI systems only perform a specif…" — ytc_Ugy635c5i…
- "I agree with chatgpt. Im severely ill and it helped me to get the right diagnosi…" — ytc_UgyVH12n9…
- "Robots will never be conscious for fucks sake. They will only be able to make us…" — ytc_UggnlcXdF…
Comment

> The important takeaway from all this that I think isn't talked enough about:
> General AI is wildly inconsistent and can't be used to reliably replace tasks that require consistency.

Source: reddit · Category: AI Harm Incident · Posted: 1775460841.0 (Unix timestamp) · ♥ 38
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_oejyo3u","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"rdc_oekp7po","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_oeq0cbv","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_oekpn07","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"rdc_oel1xby","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
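A response like the one above can be turned into the per-comment lookup the tool exposes with a short parsing step. The sketch below is a minimal illustration, not the tool's actual implementation: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown here, but the sets of allowed values are assumptions extrapolated from the values visible in this one response.

```python
import json

# Coding dimensions with example value sets. Only the values that actually
# appear in the response above are guaranteed; the full codebooks are assumed.
DIMENSIONS = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none"},
    "emotion": {"indifference", "resignation", "outrage", "mixed"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    lookup table keyed by comment ID; unrecognised values become None."""
    indexed = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue  # skip malformed rows that lack a comment ID
        indexed[cid] = {
            dim: (row.get(dim) if row.get(dim) in allowed else None)
            for dim, allowed in DIMENSIONS.items()
        }
    return indexed

raw = '''[
 {"id":"rdc_oejyo3u","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"rdc_oekp7po","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]'''

table = index_codings(raw)
print(table["rdc_oekp7po"]["emotion"])  # -> resignation
```

Validating against an explicit value set means a model that drifts off-codebook produces a visible `None` rather than silently contaminating downstream counts.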