Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
AIs being coded to never fail to the point where if they cant accurately give you information they will hallucinate wrong answers instead of admitting that they dont know is scary. AIs should be allowed to fail, if an AI is given a task where the options are failure and extremely immoral actions, and the ai picks the latter, that is scary. it is not human, and it is a burning red flag for our future
| Field | Value |
|---|---|
| Platform | youtube |
| Source | AI Harm Incident |
| Posted | 2025-09-12T15:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx7peCiYqsKd5iLgwR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw-qM5gpLfhRGroABZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzICGVulu-hmSt4hil4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyEvDzZH-dSj_c75SF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgweV37zlWIbfirSiNl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzIExna1X1GstN1FCJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgymmKBCpYQ01ROPqq94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyptdZZXV6AuwL5Cox4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzmVKGkYjA4yLUXcBF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyHt3gzhfNF2E9nxeN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
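A raw response like the one above is a JSON array of per-comment codings, one object per comment ID. A minimal sketch of how such a response could be parsed into per-comment results and validated against the coding scheme. The exact category sets are an assumption inferred from the values visible on this page, and `parse_coding_response` is a hypothetical helper, not part of any real tool:

```python
import json

# Allowed values per coding dimension, inferred from the responses shown
# above; the exact category sets used by the coder are an assumption.
SCHEMA = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"regulate", "ban", "none"},
    "emotion": {"fear", "outrage", "approval", "resignation"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of per-comment
    codings) into a dict keyed by comment ID, dropping invalid rows."""
    coded = {}
    for row in json.loads(raw):
        comment_id = row.get("id")
        if not comment_id:
            continue  # skip rows with no comment ID
        dims = {k: row.get(k) for k in SCHEMA}
        # Keep the row only if every dimension holds an allowed value.
        if all(dims[k] in SCHEMA[k] for k in SCHEMA):
            coded[comment_id] = dims
    return coded

raw = ('[{"id":"ytc_UgzICGVulu-hmSt4hil4AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"regulate","emotion":"fear"}]')
result = parse_coding_response(raw)
print(result["ytc_UgzICGVulu-hmSt4hil4AaABAg"]["policy"])  # regulate
```

Dropping rather than repairing malformed rows keeps the downstream table honest: a row that reaches the Coding Result display has passed the schema check.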