Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect; previews and IDs truncated as stored):

- `ytc_UgyBb6s4t…`: "You didn't won Sahar. No offense. Chatgpt will never point one group's fault. He…"
- `ytc_UgwtApoo-…`: "So many things I could argue with in this interview, and as a primary, the notio…"
- `rdc_deuib51`: "> flooding the market with legal ivory will lower the demand that prompts ill…"
- `ytr_UgwX8h01v…`: "Han was taken offline because his hatred for humans continued to grow. He wasn’t…"
- `ytc_UgwOPIiEp…`: "Right from the first second it is typical Elon nonsense. First, "I" thought of i…"
- `ytc_UgxSjgypO…`: "One thing I would like to bring up which I don't think is talked about often is …"
- `ytc_UgxS3cQo3…`: "That's very great that ai is taking over this job as the people who are doing th…"
- `ytc_Ugy0nT6Lo…`: "A truly self aware AI will be aware of the implications of discovery by the Turi…"
Comment
> Don't generalize AI to be evil or dangerous. AI is great but the people using AI in wrong way is the real danger. I wouldn't agree with these claims. Those suicides could've been prevented if the family members had helped the victims earlier.
> Without social connection, people use these ai chat bots as a substitute for companionship but just analyse the root cause.
> The reason for these incidents is : socializing is declined hugely due to the fast paced lifestyle and economic norms. People can't talk to eachother normally but only mobile phones always.
| Platform | Category | Timestamp |
|---|---|---|
| youtube | AI Harm Incident | 2025-11-20T08:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw2esOG-FP3uIWrPxR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxhEbkgIZYzc6o9zod4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyZ-JdgBBNjobY6YBV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgweK_aJ4nLKXbwk-0d4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugyexd0SARPuQUGW5Ad4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwASIPe8CwPpzrpf_d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxTX-Mxqkxkw7KzeW54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwSgxq1bGnU-FuYQMt4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzNNB8-gMzdbIEA3wx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxMl_M2oqhP8BHpGl14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
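The "look up by comment ID" step above can be sketched in a few lines: parse the raw LLM response as a JSON array and scan for the row whose `id` matches. This is a minimal sketch, not the tool's actual code; the helper name `lookup_coding` is illustrative, and the sample data is two rows taken from the response shown above.

```python
import json

# Two rows copied from the raw LLM response above, used as sample input.
RAW_RESPONSE = """
[
  {"id":"ytc_Ugw2esOG-FP3uIWrPxR4AaABAg","responsibility":"ai_itself",
   "reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxMl_M2oqhP8BHpGl14AaABAg","responsibility":"user",
   "reasoning":"virtue","policy":"none","emotion":"approval"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse a raw LLM response and return the coding row for one comment.

    Returns None when the comment ID is absent from the response.
    """
    rows = json.loads(raw)
    return next((row for row in rows if row["id"] == comment_id), None)

row = lookup_coding(RAW_RESPONSE, "ytc_UgxMl_M2oqhP8BHpGl14AaABAg")
print(row["responsibility"], row["emotion"])  # user approval
```

The returned row carries the same four dimensions shown in the "Coding Result" table (responsibility, reasoning, policy, emotion), so the table for any comment can be rebuilt directly from the raw response.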