Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
That's why they will build killer robot body guards and try to imprison the mass…
ytr_UgxKxidaT…
Why in the hell would you turn nuclear weapons over to AI when humanity has alre…
ytc_UgziDPyV3…
Consumer spending makes up 70% of US gross domestic product. Should too many peo…
ytc_Ugzh3RlrP…
The disproportionate coverage of the dangers of AI is like the review fallacy: t…
ytc_Ugzzohi1W…
It is the same with any tool. You can monkey around in photoshop without actuall…
ytc_UgyGBIM2b…
i got an ad on this video about an ai that helps you come up with ai prompts…
ytc_UgwOhXQJr…
I totally agree. You are so wise me on your years and you’ve worked with AI. I t…
ytc_UgyKYm16Q…
This is so dumb. Sure we should fear the consequences of AI and how it's used et…
ytc_UgysY8fj1…
Comment
Unfortunately just like the nuclear race. Nations are not going to stop. Is like waiting for Germany to be first at making nuclear weapons. Waiting for China or any other nations to make AGI first is not an option. There is a better chance for an American to be aligned with an American AI. Look around the world. The wars, wealth and those with the most power. We’ve being prompted and controlled by other humans since the beginning of time. I don’t think people in war torne countries care it’s not an AI doing it. Most would argue that at least AI gives them access to knowledge, intelligence and skills that are currently only accessible to the few. The rich and powerful weaponized and monopolize access to the best scientists, doctors, lawyers and resource; they use to prompt and perform tasks that only they can afford and have access to. Worst case scenario; most of humanity would just be trading masters. Ask many people in disadvantaged countries and communities and it’s not such of a bad thing; is it?
youtube
AI Harm Incident
2025-07-27T14:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
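Each coded comment gets one value per dimension. A minimal sketch of a record validity check follows; the allowed value sets below include only labels that appear on this page, so the full coding scheme may define more, and the `ALLOWED` / `validate` names are illustrative, not from the tool itself.

```python
# Hypothetical validity check for one coded record.
# Value sets are reconstructed from labels visible on this page only.
ALLOWED = {
    "responsibility": {"government", "developer", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "disapproval", "resignation",
                "indifference", "mixed"},
}

def validate(record: dict) -> bool:
    """Return True if every coded dimension holds an allowed value."""
    return all(record.get(dim) in vals for dim, vals in ALLOWED.items())

# The coding result shown in the table above:
coded = {
    "responsibility": "government",
    "reasoning": "consequentialist",
    "policy": "regulate",
    "emotion": "fear",
}
print(validate(coded))  # True
```

A check like this catches malformed LLM output (typos, invented labels) before records enter the dataset.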
Raw LLM Response
```json
[
{"id":"ytc_UgyLM-c2vezhrdQtkJJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwYNXDtiDIixnwn_6x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwb_bCbzpBiV29lf_R4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzUoOkfc3s-c6UGffp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxPys5Q9PUYlkHCFfR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyighSx8qlE7Fu7anR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxFeGq6RJvnKPm1Wm94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"disapproval"},
{"id":"ytc_Ugy0Km4NAtCw6--A8ed4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyMhYfBzPMn-Ep0-sl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzbl43-LASWfrXXNa54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
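The raw response is a JSON array of coded comments, and the "look up by comment ID" feature amounts to indexing that array by `id`. A hedged sketch, using two records from the batch above (how the tool itself stores results is not shown on this page):

```python
import json

# Sketch: parse a raw LLM batch response and index it by comment ID.
# `raw` reproduces two records from the batch shown above.
raw = """[
  {"id": "ytc_UgyLM-c2vezhrdQtkJJ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugzbl43-LASWfrXXNa54AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]"""

# Build an ID -> record map so any coded comment can be fetched directly.
by_id = {row["id"]: row for row in json.loads(raw)}

print(by_id["ytc_Ugzbl43-LASWfrXXNa54AaABAg"]["policy"])  # regulate
```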