Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
imagine : ?: this is 911? AI: YES ?: i need emergency help AI : sorry, i don't u…
ytc_UgwW9fzmo…
It all depends on how smart the AI is. Superhuman intelligence will replace VCs.…
ytc_UgxPgWntf…
Not true. On the average, human drivers in the U.S. cause one traffic fatality a…
ytr_UgzJga58E…
Some therapists unintentionally reproduce personal bias. Not all therapists, how…
rdc_jih4939
I hat a discusdion with Google's AI. It clearly stated intent. We started with "…
ytc_UgxsSa_ou…
if it says to it self anything thing at all that is literally the definition of…
ytc_UgxqhIfcT…
Why are you so MEAN to them. They are just ROBOT'S!!! HAHA or are they????…
ytc_Ugxvs_k-F…
Sounds like my job that wants us to start using AI to help us. Why do I need AI …
ytc_UgzPXcDRr…
Comment
Regardless 99% of jobs are going to be outsourced to AI and robots. Congress needs to start getting an action plan for a universal income for all the US citizens. Let’s get ahead of this and get the framework built so we can quickly pass it. That way people are not starving.
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Harm Incident |
| Posted | 2025-07-27T06:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgzfvLXjlHcuLw3Q8vJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxtVwRbIVoyjZmAQFl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwvLnOmBtABk0Wa4mp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugznr3yn3JY78OX0q-t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzmMgadUaZWzyY8dD94AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"unclear"},
{"id":"ytc_UgxDhPaWD7HROeYz2bx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyDZE6XHj8VP1KAs854AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugxzx8AV_6VdBIG9xcN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxGz1K6awia_KE54eZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwMa763ralENME1l2p4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
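The batch response above can be consumed programmatically. Below is a minimal sketch of parsing and validating one such response, assuming the category vocabularies are exactly those visible in the records shown here (the real codebook may define additional values), and using a hypothetical `parse_batch` helper:

```python
import json

# Allowed values per coding dimension, inferred from the responses shown
# above; the actual codebook may include more categories (an assumption).
ALLOWED = {
    "responsibility": {"user", "government", "company", "developer",
                       "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "resignation",
                "indifference", "unclear"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response and index the codings by comment ID,
    rejecting any record with a missing ID or out-of-vocabulary value."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id")
        if not cid:
            raise ValueError(f"record without id: {rec}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# One record from the response above, used as a smoke test.
raw = ('[{"id":"ytc_UgxGz1K6awia_KE54eZ4AaABAg",'
       '"responsibility":"government","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"approval"}]')
coded = parse_batch(raw)
print(coded["ytc_UgxGz1K6awia_KE54eZ4AaABAg"]["policy"])  # → regulate
```

Validating against a closed vocabulary at parse time catches the most common LLM coding failure (a value outside the codebook) before it silently enters the dataset.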