Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- `ytc_UgzdWx8DR…`: "AI, or whatever, is going to ruin society, but it's all spearheaded on the backs…"
- `ytc_UgwBdAv8p…`: "SHOW ME HOW: I am starting a print on demand online store. I would like to use A…"
- `ytc_Ugym8UaJZ…`: "chatgpt can be so annoying to converse with. in a group conversation chatgpt wil…"
- `ytr_UgxjvXXbI…`: "how about self driving cars ONLY for the \"last mile\", you use public transportat…"
- `ytc_UgyLLnZxt…`: "0:49 Someone shouldake AI deepfake of Asmogold reactions to some crazy shit and …"
- `ytc_UgzJPsbZU…`: "One Big glitch... Why is the super AI even using a keyboard, and why is it looki…"
- `ytc_Ugyidxz0w…`: "I think we're still safe if it can get companies into legal issues trying to mak…"
- `ytc_UgyeJfh1M…`: "4:07 eventually they won’t need her, because they gonna assigns a agent to do he…"
Comment
About the AI ... when you ask an AI a "personal" question, as if "they" ever did a thing, it can say no, because each session is a new instance of the AI, that is not associated with any other sessions, so the AI you're speaking to today, is NOT the same AI you speak to tomorrow, especially if the sessions are interrupted. Just because ONE instance of ChatGPT said a thing, that doesn't mean that every instance of it everywhere is aware of that, it's not a hive-mind.
youtube · AI Harm Incident · 2026-04-26T07:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwJCcqft119854L4C94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwWG3FyvH8iGBHHeZZ4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyEy5dvX7NCwWAZEDp4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxsceN4IQABEBQGhzd4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzLZcouNWtv8vpZc-94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxVgOAQLW3_BVgs1A94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugziz0CGQO4JtJawQQV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugw7s3asa6QNQ0L7mQB4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwWFsZK7r_QVeuGzbN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugzhgtg525b7Bd_ame54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
```
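Because the raw response is a JSON array with one object per coded comment, looking a comment up by ID reduces to parsing the batch and building an index. A minimal Python sketch of that step, assuming only the field names visible in the response above (the function and variable names are illustrative, not the tool's actual API):

```python
import json

# Two rows copied from the raw response above; a real batch would contain
# the full array.
raw_response = '''[
  {"id": "ytc_UgwJCcqft119854L4C94AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwWG3FyvH8iGBHHeZZ4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]'''

# The four coding dimensions shown in the "Coding Result" table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_comment_id(raw: str) -> dict:
    """Parse one batch of LLM codes and index the rows by comment id,
    checking that every row carries all four coding dimensions."""
    rows = json.loads(raw)
    for row in rows:
        missing = [d for d in DIMENSIONS if d not in row]
        if missing:
            raise ValueError(f"{row.get('id', '?')} is missing {missing}")
    return {row["id"]: row for row in rows}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgwWG3FyvH8iGBHHeZZ4AaABAg"]["policy"])  # regulate
```

Validating the dimension keys at parse time means a malformed model response fails loudly here rather than rendering a partial coding-result table later.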