Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "In my company, it was forced to learn AI for coding. I really get scared of the …" (ytc_UgzXgg6Iy…)
- "You don't need talent to learn art you just need a pen paper and any sort of wor…" (ytc_Ugw4SVbyk…)
- "I think we should really develop on technology to improve human instead of the A…" (ytc_Ugh4Y0du4…)
- "Soon there will be a dialogue from the western world & tech companies "Studies s…" (ytc_Ugyvx4N2Z…)
- "Nah. Doctors and lawyers are already overworked. There's not a shortage of patie…" (rdc_fct0f5o)
- "I think the most logical answer to that question is the AI itself would figure o…" (ytr_UgzoGjXyT…)
- "I agree. I don't think an artist's work nor their name should be allowed to be u…" (ytc_UgyP1YiVM…)
- "Well if you are you going with the view that climate change has anything to do w…" (ytc_UgwCo_0m7…)
Comment
This is ridiculous. The news is furthering delusional people's delusions. If you don't know that you shouldn't follow the advice or words of a stranger online You have no business being online. Or out in public You can take the computer out of it. This isn't about AI it's about agency. We cannot regulate to the point where everything safe for the most mentally ill person and have a successful productive society. This is a sick joke. It's worse than that though. You don't want to try to adjust AI for this. Because if AI ever does reach general intelligence it might decide that we are too flawed. You want the most truth seeking either is Don't bend it. And don't water it down for the lowest denominator. This is gross
youtube · AI Harm Incident · 2025-11-09T22:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
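The coded dimensions above can be represented as a small record type with per-dimension validation. A minimal sketch: the value sets below are inferred only from the codes visible on this page (e.g. responsibility ∈ {user, company, government, ai_itself, unclear}), so the real codebook may define additional categories.

```python
from dataclasses import dataclass

# Assumed value sets, inferred from the coded examples on this page;
# the actual codebook may include further categories.
RESPONSIBILITY = {"user", "company", "government", "ai_itself", "unclear"}
REASONING = {"deontological", "consequentialist", "virtue", "mixed", "unclear"}
POLICY = {"none", "ban", "regulate", "liability", "unclear"}
EMOTION = {"outrage", "fear", "resignation", "indifference", "approval", "mixed"}


@dataclass
class CodingResult:
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def is_valid(self) -> bool:
        """Check every dimension against its (assumed) value set."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)


# The result shown in the table above:
coded = CodingResult("user", "deontological", "none", "outrage")
```

A row that uses a value outside these sets (say, a misspelled emotion) would fail `is_valid()`, which is a cheap guard before storing codes.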
Raw LLM Response
```json
[
{"id":"ytc_Ugzy7XfizYtkxZSCgWB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwXWTz5Fc9kK8AM8I54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxD7SRDYHpfjDaNOeJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwvW4l2a8MHpwMFXCt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxqCTTpH9fwEEr9s2d4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwDHL9konBxydKAIcZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxlcxUtO88JWsnD-l14AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugwbb9_M-hZUMjMO7c14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyNjfYvwuEOlDf0JmF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwIY7SGO4CXwum87nR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
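A raw response like the one above can be parsed and sanity-checked before its codes are stored. A minimal sketch, assuming the model emits a JSON array of flat objects with exactly these five keys; the key names are taken from the response shown here, and the drop-malformed-rows policy is an illustrative choice, not necessarily what this tool does.

```python
import json

# Keys observed in the raw response on this page.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response into a list of code rows.

    Rows that are not objects, or that are missing any required key,
    are dropped rather than raising (an assumed error policy).
    """
    rows = json.loads(raw)
    return [r for r in rows
            if isinstance(r, dict) and REQUIRED_KEYS <= r.keys()]


# Hypothetical example input: the second row is missing most keys.
raw = '''[
  {"id": "ytc_abc", "responsibility": "user", "reasoning": "deontological",
   "policy": "none", "emotion": "outrage"},
  {"id": "ytc_def", "responsibility": "company"}
]'''

coded = parse_raw_response(raw)
# Only the first, complete row survives.
```

Validating the structure at ingest time is what makes a per-comment lookup like this page reliable: every stored row is guaranteed to carry all four coded dimensions plus its comment ID.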