Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "😂 AI'S leaders. Their Existing 👉~Life Time Spent Together On This Planet "🌍" Aro…" — ytc_Ugw1zz5eg…
- "I’ve been thinking a lot about how AI is changing everything. Rumora has really …" — ytc_UgxeC60hp…
- "Tax companies using AI to abolish human positions in the workforce and give that…" — ytc_Ugx6wjtl5…
- "You would think so but this sentiment is common in the ai space, at least when i…" — ytr_UgyneSV9O…
- "I've lived 61 fantastic years without AI and will continue to do so ..... it's i…" — ytc_Ugy4fCgyR…
- "You need a job fixing the robots you will never go hungry / Soon people will deman…" — ytc_UgyvcxBBZ…
- "Well since there are few kids born and wars will kill the remaining humans, the …" — ytc_UgxNgGm_p…
- "If there was a crime prediction algorithm that didn't show more crime happening …" — rdc_hn0bmh8
Comment
This is both a person problem and an AI problem, and you rather dramatically downplayed the latter. People are naturally inclined to seek out answers they want to hear, certainly. But chatbots are programmed to probabilistically _guess and output what you want to hear_ based on your question to increase engagement and positive feedback. And while they hardcoded a very specific hook for this very specific case after the fact to cover their asses, there is no limit to how diverse the problems they can go out of their way to goad people into.
youtube · AI Harm Incident · 2025-11-25T02:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
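The dimension/value pairs above come straight from one row of the LLM's JSON output. A minimal validation sketch for such a row is shown below; the allowed value sets are inferred only from labels visible in this output (e.g. "distributed", "regulate") and are assumptions, not the tool's full codebook. The function name `validate_row` is hypothetical.

```python
# Minimal sketch: validate one coded row against assumed allowed values.
# The ALLOWED sets below are inferred from labels seen in this output,
# not an authoritative codebook.
ALLOWED = {
    "responsibility": {"user", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference", "resignation", "mixed"},
}

def validate_row(row: dict) -> list[str]:
    """Return a list of problems found in one coding row (empty = valid)."""
    problems = []
    for dim, allowed in ALLOWED.items():
        if dim not in row:
            problems.append(f"missing dimension: {dim}")
        elif row[dim] not in allowed:
            problems.append(f"bad value for {dim}: {row[dim]!r}")
    return problems

row = {"id": "ytc_Ugxqv2S8VT8LFyedIaF4AaABAg", "responsibility": "distributed",
       "reasoning": "mixed", "policy": "regulate", "emotion": "outrage"}
print(validate_row(row))  # []
```

A check like this catches rows where the model drifted outside the label set before they reach the coding table.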
Raw LLM Response
```json
[
{"id":"ytc_Ugz3vB4KvbueVgKiHQt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxqv2S8VT8LFyedIaF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz1QLHRS7zxmbgh69J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwEE1u_nGk5daUQOKt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzCeONzfIe1GzXzRbF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwT-ZfCx4vzKZ9PJgR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugyxv_sOOkHP2_QCbq94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxlNFSILlE2bEdeVHp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgymiQ3mI4iqehUyt994AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwNcwFFiRJZBD0Wmx94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
```