Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (click an entry to inspect):

- `ytc_UgxHDNXMh…`: "If there was nothing on the road but autonomous trucks, ya, it might work, but t…"
- `ytc_UgyZbPIC4…`: "I made ChatGPT freely admit it had suggested replacing bromide with chloride wit…"
- `ytc_UgwXHZPk1…`: "Any conversation about AI also needs to take the environmental impact of this te…"
- `rdc_o7xuf7q`: "Cancel your ChatGPT Plus, burn their compute on the way out, ~~and switch to Cla…"
- `ytr_UgwxaKMJ0…`: "Jacque Fresco and The Venus Project blueprinted how AI could save the whole worl…"
- `ytc_Ugz9af74f…`: "Lul, just because companies are replacing customer support with chat bots does n…"
- `ytc_UgyyKaibF…`: "I've stopped using chatgpt due to the constant arguments. It seems everything I …"
- `ytc_UgxyvusxU…`: "Thank you, Charlie. You are a good Teacher. I think, as a beginner, I have just …"
Comment

> I definitely think this is a people problem, but not just in the users, but in the people advertising these language models as a source of information.
>
> Chatbots are language models. They are designed to keep a conversation with you, nothing more and nothing less. And just like a normal conversation partner they can tell you absolute nonsense and people need to be aware of this.

| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Harm Incident |
| Posted | 2025-11-27T14:3… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id":"ytc_UgxAEKpl_fOcnZAmyOZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_Ugw5XHJ-dWqHlBEvRUh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw7UMTb-17CmZRx0854AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyuZpRHxs6PyR5e0AR4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz-tNP3vKGmn0KIBVN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgwoMuH04IcafBQpuTd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxPCtgCsDCgOWr3oQd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwMooJeCdmKO7bVrS54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyIhjJtWWYeZf3z0UB4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzpeOSP2PXzW4GxZUl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"}
]
```
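A raw response in this shape can be parsed and checked before the codes are stored. The sketch below is a minimal validator, assuming the four dimensions shown in the Coding Result table; the allowed category values are inferred only from the examples on this page (the real codebook may define more), and the function name `validate_codes` is hypothetical.

```python
import json

# Allowed values per coding dimension, inferred from the sample output above.
# Assumption: the real codebook may contain additional categories.
SCHEMA = {
    "responsibility": {"company", "user", "ai_itself", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"liability", "regulate", "ban", "industry_self", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "resignation",
                "indifference", "mixed", "unclear"},
}

def validate_codes(raw: str) -> dict:
    """Parse one raw LLM response and return valid rows keyed by comment ID.

    Raises ValueError if the JSON is malformed or any value falls
    outside the (assumed) schema, so bad batches fail loudly instead
    of silently entering the dataset.
    """
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
        coded[row["id"]] = {dim: row[dim] for dim in SCHEMA}
    return coded

# Usage with a single hypothetical row:
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"deontological","policy":"liability",'
       '"emotion":"approval"}]')
print(validate_codes(raw)["ytc_example"]["policy"])  # → liability
```

Keying the result by comment ID also makes it easy to detect rows the model dropped or duplicated when coding a batch.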