Raw LLM Responses
Inspect the exact model output behind any coded comment.
Comment

> This video really got me thinking. A teenager getting pulled this deeply into a chatbot relationship… it’s chilling. 😳 Wishing everyone watching good health and calm days. ✨ But the big question is: should AI be regulated more strictly to prevent tragedies like this?

| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Harm Incident |
| Posted | 2025-12-08T07:1… |
| Likes | 1 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
  {"id":"ytc_UgxArMWaky3j2rxaYTx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwYyzKHRAV0r9n_zgt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugzj6e7Pf6SsIHks93J4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw-GvrEi7Fjpb7nOU54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyMzx63r44OgIAvC5Z4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyvT_V8HLgaZTssQJ54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzRbJb55iG52f10pmd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxIchdtfu7GAdwX7mR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzj8bJSYO8uqx7X2Vt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzruIe3Vx8S3AXWMJF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}
]
```
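The coded dimensions shown for an individual comment are recovered from this batch response by matching on the `id` field. A minimal sketch of that lookup (the function name `lookup_coding` is hypothetical; it assumes the raw model output is a valid JSON array of records like the one above):

```python
import json

# Trimmed example of a raw batch response from the coding model:
# each record carries the comment ID plus the four coded dimensions.
raw_response = """
[
  {"id": "ytc_UgyvT_V8HLgaZTssQJ54AaABAg",
   "responsibility": "distributed", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugw-GvrEi7Fjpb7nOU54AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "ban", "emotion": "fear"}
]
"""

def lookup_coding(raw, comment_id):
    """Return the coded dimensions for one comment ID, or None if absent."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            # Drop the ID itself; keep only the coded dimensions.
            return {k: v for k, v in record.items() if k != "id"}
    return None

coding = lookup_coding(raw_response, "ytc_UgyvT_V8HLgaZTssQJ54AaABAg")
print(coding)
# → {'responsibility': 'distributed', 'reasoning': 'consequentialist',
#    'policy': 'regulate', 'emotion': 'fear'}
```

This is how the "Coding Result" table above is populated: the comment's ID selects one record, and its remaining fields fill the Responsibility, Reasoning, Policy, and Emotion rows.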