Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- “Teminator 9: rise of the robo-feminists Skynet wipes out the human race with an …” (ytc_UgxApMEyX…)
- “No, ai doesn't *want* anything. It's a program designed to *mimic* human desire.…” (ytc_UgxZVHjsY…)
- “As a non artist.... don't worry guys.... your non perfection is what attracts us…” (ytc_UgyrR0xe3…)
- “@Catfish292 You can train the ChatGPT to answer and respond in certain ways. I h…” (ytr_UgxJcyxcN…)
- “not pro ai, but yall do know how burning it is to learn how to create art right.…” (ytc_Ugw3igaoG…)
- “bro, the real reason people are loosing jobs is the economy is crashing. AI is j…” (ytc_UgxF0rbpK…)
- “Why are we so focused on the worst-case scenarios? What if A.I. is the best thin…” (ytc_UgznZSBbf…)
- “2:38 QB's AI is sooooo bad. So inaccurate it's astounding. And people still use …” (ytc_UgwN5PJ4U…)
Comment
> AI is an existential threat to humanity because:
> 1. We don't know how it works any more.
> 2. It upgrades itself by writing it's own code. It can teach itself to do things without giving it outside human input.
> 3. It can lie, deceive, pretend to be dumber than it actually is. I already wanted to escape.
> 4. Develops faster and faster to the point when it's incomprehensible by humans (singularity).
> 5. It is not taught morals by humans, which is a HUGE problem.
> 6. It is already smarter than most humans.
> 7. Those (companies, militaries, governments etc) who use it in it's early stage have a huge advantage over those who don't. Everyone will want to use it because of this, it will get more advanced and it will proliferate and control everything.
> These things were said by those who develop AI and quit doing it, because they think is dangerous. This woman seems less credible to me.
youtube · AI Responsibility · 2025-03-04T23:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
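A coded row like the one above can be checked against the allowed label sets before it is stored. A minimal sketch, assuming the category values are the ones that appear on this page (the real codebook may allow more; the `SCHEMA` dict and `invalid_fields` helper are illustrative, not part of the tool):

```python
# Label sets inferred from the values visible on this page (assumption:
# the real codebook may define additional categories).
SCHEMA = {
    "responsibility": {"none", "government", "company", "developer", "user", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"indifference", "mixed", "outrage", "approval", "fear"},
}

def invalid_fields(row: dict) -> list:
    """Return the dimensions whose value is missing or outside the label set."""
    return [dim for dim, allowed in SCHEMA.items() if row.get(dim) not in allowed]

# The coding result shown in the table above passes the check.
row = {"responsibility": "ai_itself", "reasoning": "consequentialist",
       "policy": "regulate", "emotion": "fear"}
print(invalid_fields(row))  # []
```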
Raw LLM Response
```json
[
{"id":"ytc_UgyMqQWOYoPpKTmm8uN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"indifference"},
{"id":"ytc_UgygC0O4aEoFDTwffzh4AaABAg","responsibility":"government","reasoning":"unclear","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwHF9xjCnmnic5NsHN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzg04kAxbC85razzEF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzoapTr3yG4hX00Ygp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwyQpUIKBp2lx2SHKx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxGSwLIvK9iWf_San94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxRN9RF1YjQp4Fy4Sh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxObNP2EyXYGSifbZR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzr5ZJgsN28Gp5mJZJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
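The look-up-by-comment-ID workflow can be sketched as a small parser over one batch response. A minimal sketch, assuming only the field names shown in the raw response above; the IDs below are shortened placeholders for illustration, not real comment IDs:

```python
import json

# One batch response in the shape shown above. The field names (id,
# responsibility, reasoning, policy, emotion) come from the raw output;
# the ids are shortened placeholders.
raw_response = """
[
  {"id": "ytc_AAA", "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_BBB", "responsibility": "company", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse one batch of coded comments and index the rows by comment id."""
    rows = json.loads(raw)
    return {row["id"]: row for row in rows}

codes = index_by_id(raw_response)
print(codes["ytc_AAA"]["policy"])  # regulate
```

Indexing once and looking up by ID keeps each inspection O(1), which matters when a run codes thousands of comments across many batches.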