Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `ytc_UgwqG8HIx…`: "I think the key to AI safety is making AI accept that biologic life, but especia…"
- `rdc_oi0rnzp`: "Count down to THEM being victims ...the media, libs, Dems won't let them be corr…"
- `ytr_Ugyos_9Fh…`: "Not to nitpick, but AI:s are usually not built, instead they absorb enormous amo…"
- `ytc_UgyJvMQqn…`: "me:are u a robot or a human / them speaking at the same time:we are robotic humas…"
- `ytr_UgzTMhGye…`: "i honestly agree. if i find meaning in a piece of art and i connect with it, onl…"
- `ytc_UgwD_M3xM…`: "Actually Japan has already started building AI robots for men 300k the starting …"
- `ytc_UgyL0c6RL…`: "Robots on Cosmo really, Scientist and tech should put more effort into real huma…"
- `ytc_UgxVJ-3Km…`: "\"You dont know whats coming\" / Ai is regressing and making no money. Of course I k…"
Comment
I’m getting really tired of this argument that “the way AI takes the ideas and transforms it into something else is no different than how a human creates their original ideas so it is fair use and not stealing”. Wrong. Because AI is NOT HUMAN. We are not the same as AI and will never be and should never be. So stop comparing ourselves to the way AI works it’s a machine, we are an organism it’s not that hard to understand the thought process here and human emotions at play. AI can be a great thing. I’m not against it in a sense of it helping us but this is a clear red flag for how it can begin to dehumanize us. Like really, we are going to give all the credit, control, and benefit to the AI. Has humanity lost its balls? Does anybody else see that we are slowly losing control. I don’t want to believe it but we could be on a path of where we are the machines and the AI is the human
Source: youtube · 2025-07-20T08:4… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
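Each coded dimension takes a value from a small closed vocabulary. As a minimal sketch, a coded record can be checked against those vocabularies before it is accepted; the value sets below are inferred from this page's table and raw response, not from an official codebook, so they may be incomplete:

```python
# Hypothetical validation sketch. The ALLOWED vocabularies are inferred
# from the values visible on this page and may not cover the full codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "user", "government", "unclear"},
    "reasoning": {"deontological", "consequentialist", "contractualist",
                  "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"outrage", "approval", "fear", "mixed", "indifference"},
}

def validate(record: dict) -> list[str]:
    """Return the dimension names whose value falls outside the vocabulary."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The coding shown in the table above passes cleanly:
row = {"responsibility": "ai_itself", "reasoning": "deontological",
       "policy": "ban", "emotion": "outrage"}
print(validate(row))  # → []
```

A non-empty return value flags a record for manual review rather than rejecting the whole batch.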
Raw LLM Response
```json
[
  {"id":"ytc_Ugzk5_YKBhSAS94MFBR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxl1RkuB14w4SoJH_V4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwjnrDpTG5CVgvtacB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwQPasxQWv-WKIoacZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyEJaagmU0av1eIv594AaABAg","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugzoj3_A_R_xnsTOp9d4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgybuA7WhscaNRJpROx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy2JsnNAK9csYDc3at4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwU9766-vTtwBKbztB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugy8EoyK8CTp2WZX7TR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
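The raw response is a JSON array keyed by comment ID, so looking up the coding for any one comment reduces to parsing the array and indexing it. A minimal sketch (the two records are copied from the response above; `raw_response` would normally hold the full model output):

```python
import json

# Hypothetical lookup sketch: index a raw batch-coding response by comment ID.
# For brevity, raw_response holds only two of the records shown above.
raw_response = """
[
  {"id": "ytc_Ugzk5_YKBhSAS94MFBR4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwjnrDpTG5CVgvtacB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
"""

# Build an id -> record index for O(1) lookup by comment ID.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgwjnrDpTG5CVgvtacB4AaABAg"]
print(coding["policy"])  # → ban
```

Indexing once up front keeps repeated lookups cheap when inspecting many comments from the same batch.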