Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
im so fucking sad right now. humanity is doomed. this guy is saying he was tricked by chatbot, which literally just spits out the most likely response based on millions on human samples, he doesnt understand how chatbots work and he thinks its a real (lol), and he is being platformed. thats sad. but what is really really really really disheartening, is the like to dislike ratio and all the top comments saying he sounds intelligent. what the fuck. all he did was use a few buzzwords with confidence, lie a shitload, and he also used the word spiritual to argue with people who call him out for thinking a FUCKING CHATBOT IS ALIVE. im fucking disgusted with all of you and if you keep this shit up humanity is doomed. you all need to get a better grasp on reality or any fucking cheap swindler is going to fuck you over, over and over, until there is nothing left of our collective resources.
Source: youtube · Video: AI Moral Status · Posted: 2022-07-23T07:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyQHW3eaCL8YXba1X54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw0N6by9GqLUROh0EV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwaHkUxc5IYO8sJ7J14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyVnf5Pf64YkHqimNx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgypUzeDa_NZEED1PVJ4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"sadness"}
]
```
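The raw response above is a JSON array of per-comment codes, one object per comment in the batch. A minimal sketch of how such a response could be parsed and indexed by comment ID (the helper names here are illustrative, not part of the actual pipeline; the example reuses the last entry from the response above):

```python
import json

# The model returns a JSON array of per-comment codes. Parse it and
# index the rows by comment ID so one comment's coding can be looked up.
raw_response = """
[
  {"id": "ytc_UgypUzeDa_NZEED1PVJ4AaABAg", "responsibility": "user",
   "reasoning": "mixed", "policy": "unclear", "emotion": "sadness"}
]
"""

codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

code = codes_by_id["ytc_UgypUzeDa_NZEED1PVJ4AaABAg"]
print(code["responsibility"], code["reasoning"])  # user mixed
```

Indexing by `id` makes the join back to the original comments explicit, which also surfaces any IDs the model dropped or hallucinated in its response.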