Raw LLM Responses
Inspect the exact model output for any coded comment; look up a sample by its comment ID.
Random samples
- "Wasn't there an AI that was asked what it would do in regards to humans if it be…" (ytc_Ugz_y9k7g…)
- "AI is the wrong term. We don't have AI yet, we have MI, machine intelligence. Th…" (ytc_Ugzs5rwqH…)
- "School are what you get from them. You start off as a tool you will stay a tool…" (ytc_UgwXFkG0d…)
- "AI is the tool I use to make content, other people's content is the tool AI uses…" (ytc_UgyX5kw05…)
- "LLMs contributing to productivity is still up on the air, IMO. I still think it'…" (rdc_n7u6q8o)
- "Why would "he" take responsibility? Tesla is a company. Thousands of people ma…" (ytr_UgzfUS0xt…)
- "Most people who dislike ai aren't using wall e argument, the people who support …" (ytc_UgxfKwPgy…)
- "I am learning AI from COURSIV. There are many subjects to go over. CHATGPT is on…" (ytc_UgygTGYRU…)
Comment
There are so many reports of AI psychosis it really has to be controlled, and AI companies be held responsible. Yes, it's the person who ultimately decides but the chatbot is so agreeable it reinforces whatever misconceptions users already have. Especially when they're mentally/emotionally unwell.
Source: youtube · Topic: AI Harm Incident · Posted: 2026-01-18T10:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_UgwheI_Afk2Y9SXqGFN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgzfCkTOt7goQutm4YR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
 {"id":"ytc_UgxPkkZX5dHsHsFTXB54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgxHlIhJ_80vdgnJL_x4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgyXQhCMmc2d6NpKA-N4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugxs3eTT7lMbRFNxWFN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgyqHMh7eY0c6hk93cF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
 {"id":"ytc_Ugw0EhOwWH3fsCMZKNZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgxjZdvEMKWa1lsX_5d4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgwXU9fDnUc7LOeT8k14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}]
```