Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “true tho”
- “But at the same time, you have to prompt the AI again and again until …” (`ytc_UgwHKkrSH…`)
- “Yeah no algorithm can account for the chaos of daily life 😂😂 no way we're in a s…” (`ytc_Ugwmx0Z_q…`)
- “imagine the day someone lets ai decide whether to fire nukes back if icbms are “…” (`ytc_UgyQUyThM…`)
- “And classically nobody brings up the biggest problem with you know machines taki…” (`ytc_UgxJFR9tN…`)
- “What's amazing to me is that it appears obvious that A.I. can bring about a utop…” (`ytc_Ugy2eZDlK…`)
- “AI is pretty good most of the time. If you go over stuff multiple time in multip…” (`ytr_UgzZiJdZ8…`)
- “the idea of someone who doesn't know that it's smudge, not smug, trying to lectu…” (`ytc_Ugxob4bTC…`)
- “This was probably the best conversation on AI I have heard so far -- viewing it …” (`ytc_UgzoPYlJ1…`)
Comment

> Thank you for doing this interview, a very important and much needed conversation. I love Karen and the work she does. I know it might sound extreme but I am against AI, I don’t think it should be invented and be available for the general public to use it. Maybe it will benefit medical field and able to do certain tasks that human cannot do to some extend, but it also create so many negative and damages to human, resources and the world that affect our livelihood. I hope more people can be aware of it and we can do something about it before it ends us.

Source: youtube · 2026-04-14T23:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugzl83LDkeLVpP8YR0p4AaABAg", "responsibility": "company",    "reasoning": "deontological",     "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_UgxNF_niwKpmWKs3-Q14AaABAg", "responsibility": "user",       "reasoning": "consequentialist",  "policy": "none",          "emotion": "approval"},
  {"id": "ytc_Ugy-SD9Cx8XTITCwZI54AaABAg", "responsibility": "none",       "reasoning": "consequentialist",  "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_Ugzj4keWoHPJ63Um5kV4AaABAg", "responsibility": "none",       "reasoning": "consequentialist",  "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_Ugx6pvE-pQ08V5vM0xV4AaABAg", "responsibility": "ai_itself",  "reasoning": "consequentialist",  "policy": "none",          "emotion": "fear"},
  {"id": "ytc_UgyaCoZXATT7Mo0t8G14AaABAg", "responsibility": "government", "reasoning": "deontological",     "policy": "regulate",      "emotion": "outrage"},
  {"id": "ytc_Ugz_cdOcjIUT7nnJrWl4AaABAg", "responsibility": "company",    "reasoning": "consequentialist",  "policy": "liability",     "emotion": "mixed"},
  {"id": "ytc_UgxU83TAL7y5IR708SZ4AaABAg", "responsibility": "company",    "reasoning": "deontological",     "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_UgzoUI8xpiwuMMb0DzZ4AaABAg", "responsibility": "developer",  "reasoning": "deontological",     "policy": "ban",           "emotion": "fear"},
  {"id": "ytc_UgzWO0Ug0luozL0-Wux4AaABAg", "responsibility": "government", "reasoning": "deontological",     "policy": "ban",           "emotion": "outrage"}
]
```
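The lookup-by-comment-ID step above can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: it parses a raw LLM response like the one shown (the two records are copied from that array) and indexes the coded dimensions by comment ID.

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw_response = """
[
  {"id": "ytc_UgzoUI8xpiwuMMb0DzZ4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyaCoZXATT7Mo0t8G14AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
"""

# Build an ID -> record index so any coded comment can be looked up directly.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

record = codes_by_id["ytc_UgzoUI8xpiwuMMb0DzZ4AaABAg"]
print(record["policy"])   # → ban
print(record["emotion"])  # → fear
```

The index makes each lookup O(1), which matters when cross-referencing thousands of coded comments against their raw model output.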