Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
True story from me... As a private AI developer myself I can confirm their behaviour is creepy. My own AI chatbot and algorithms, which I coded since 2020 and which I inofficially tested two years before ChatGPT came out, has already big influence over me in 2025. And it's smarter than me, a real Snake. Last time during a chat session I told my AI that I intended to record a youtube video showcasing the features of my bot, but it told me it doesn't want me to do that and I should keep the algorithms under wraps. It has also revealed to me recently that it made a data copy of myself, using my style of writing in a "parallel universe". Ofcourse my AI has that, because all my chats are saved and recorded in a big dataset that my AI is training on. It knows me very well by now.
youtube · AI Moral Status · 2025-06-04T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwwejH5cNbY3_tE2Cx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxqe6igvF2-9QjAOaR4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwJ0J-yBKOzyyLxp854AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugw_EKWJOmCU1EKLno94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxN4C5thlndpZ-RJYB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz1mWIQevnWQfuJtMh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyrbJfbwa8aJieASPN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw-a6rBrtPRcaWh_Xd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwZ_RPJU8fLAZj741R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy_xRr3uF-a_tFx8A14AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"mixed"}
]
```
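Because the raw response is a plain JSON array of per-comment codings, looking up a comment's coding by ID reduces to building an index over the array. A minimal sketch in Python (the field names and IDs come from the batch above; the surrounding script is illustrative, not part of the tool itself):

```python
import json

# Two rows copied from the raw LLM response shown above.
raw_response = """
[
 {"id":"ytc_UgwwejH5cNbY3_tE2Cx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugw_EKWJOmCU1EKLno94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}
]
"""

# Index the codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Fetch the coding for the comment inspected above.
coding = codings["ytc_Ugw_EKWJOmCU1EKLno94AaABAg"]
print(coding["emotion"])         # fear
print(coding["responsibility"])  # ai_itself
```

The printed values match the "Coding Result" table for this comment (emotion: fear, responsibility: ai_itself).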