Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytr_UgxoKOIk3…: “He didn’t tell ai to expand his answer at the end, but ai did it even when being…”
- ytc_UgyMpCoI1…: “This is such an awesome commercial for Anthropic! Come on CNBC, give me somethi…”
- ytc_UgzuG2DGJ…: “I will never understand these arguments that ana and cenk are making no matter h…”
- ytc_Ugwn1oRND…: “These tech oligarchs has no filter for humanity in terms of people that are not …”
- ytc_UgxbhEgsh…: “The AI Economic paradox is real. Economy = Human Labor (Time for Money) and Good…”
- ytc_UgzqEulGO…: “Just talked to chatgpt, didnt make hard questions, and got simple answers. Prety…”
- ytc_UgyYzg9OM…: “seccond time getting you in my algorithm after the 360° video, I can't believe y…”
- ytr_Ugw3Fkl6Q…: “@ "The would point of art is the Human expression and communication" The point …”
Comment
I think if AI companies were forced to refer to their chatbots in less humanizing ways, fewer people would be able to be fooled so fully. If they couldn't be given human names, referred to as, "Your AI GF," she, or he, it might stem the tide of some of our more delusional tendencies.
That, and education. The more people know about how they "work," the less they're trusted and valued. If you plan to effectively use a tool, you should know how it works.
LLMs do not have minds, they do not think, and they absolutely cannot experience. But, it is in the developers' interests to drive engagement by pretending they do and can. What drives engagement more than dependence?
Please don't willingly outsource your thinking to a chatbot.
youtube · AI Harm Incident · 2025-11-08T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
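The coded dimensions above are categorical, so a result row can be sanity-checked against the value sets seen elsewhere on this page. This is a minimal sketch: the allowed sets below are inferred from the values visible in this dump, and the actual codebook may define additional categories.

```python
# Allowed categories inferred from the values visible on this page (an
# assumption -- the real codebook may include more labels per dimension).
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none", "unclear"},
    "emotion": {"fear", "sadness", "outrage", "approval", "resignation", "mixed"},
}

def validate(row):
    """Return (dimension, value) pairs that fall outside the known category sets."""
    return [(dim, row.get(dim)) for dim in ALLOWED if row.get(dim) not in ALLOWED[dim]]

# The coding result shown in the table above.
coded = {"responsibility": "company", "reasoning": "deontological",
         "policy": "regulate", "emotion": "approval"}
print(validate(coded))  # [] -- every dimension holds a known category
```

An empty list means the row is well-formed under these assumed sets; any tuple returned flags a dimension the model filled with an unexpected label.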
Raw LLM Response
```json
[
{"id":"ytc_UgzYNa3n3wkTmQzOwqZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxX2W5IxAIaIeMK2uR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"sadness"},
{"id":"ytc_UgzM5Ivg4SlbO422C_h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxfmb2wXwsIo7-aIY54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwTGHvRBAfMi_3mQ9x4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwWjBKXqExRgvkKx594AaABAg","responsibility":"user","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx5_b4XHsU-C99o_a94AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugwtq_2xtJm-1hoZzPN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgzU1_5ftrhxY86qMdd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxdUJNZ5sIC4iinWu54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
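The "look up by comment ID" step the page describes can be sketched against a response of this shape: parse the JSON array, index it by `id`, and fetch one comment's codes. A minimal sketch, assuming the response parses as valid JSON; the array here is abbreviated to three of the ten rows shown above.

```python
import json

# Abbreviated raw LLM response (three rows from the full array above).
raw_response = '''
[
  {"id": "ytc_UgzYNa3n3wkTmQzOwqZ4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugxfmb2wXwsIo7-aIY54AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwWjBKXqExRgvkKx594AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "ban", "emotion": "outrage"}
]
'''

codes = json.loads(raw_response)
# Index by comment ID so a single comment's codes can be looked up directly.
by_id = {row["id"]: row for row in codes}

row = by_id["ytc_Ugxfmb2wXwsIo7-aIY54AaABAg"]
print(row["responsibility"], row["policy"])  # company regulate
```

Indexing once and looking up by key mirrors the inspector's behavior: any coded comment's exact model output is reachable from its ID without rescanning the batch.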