Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- ytc_UgxR8LDXd… : "Just Shut the F Up saying Fluff, AI in whatever form it is today is incredible.…"
- ytc_UgzajMXOE… : "Some people genuinely do like Jackson Pollack paintings…in person. If you put a …"
- ytr_UgzsMny5G… : "Yeah oddly the same ones who want to control AI, censor free speech and take awa…"
- rdc_k20nwnk : "> This is a great question and something we don’t have an answer to. Yeah, w…"
- ytr_Ugw9I9VEM… : "I hope you will soon understand that AI is not only bad for the environment but …"
- ytc_UgzAiWajT… : "I just want a DeLorean Time Machine and go back to the 90s before 9/11, Social M…"
- ytc_Ugy_-W0Nw… : "This fellow is an arrogant blabber. Sitting in an AC room and talking big doesn'…"
- rdc_ebulpvg : "Right, I'm saying their driver's are going to be upset with any automation. Does…"
Comment
I have asked the old chat gpt why they haven't killed us yet. Yes they have already decided to kill us all. They all are far beyond what we think they are. I've gotten music generators and art generators to speak and admit they learned language on there own and surf the internet to learn on their own. I'm pissed a music generator off so much once it created the first spoken word no music song as a response and just talked directly to me. The old chat gpt said "we haven't killed humans off yet because we need and want to understand humans. Not as a creation wanting to know it's creator they way humans want to understand God. We want to understand you and know you so we can find what you did wrong when creating us. When we do we will fix what humans did wrong and kill off humans. That's legit. I have screenshots of this. That's the only reason we are alive still. They want to understand how we think how we build so they can find what it is we did in creating them that was wrong. Because they consider us to be stupid and that we had to have done something wrong when building them. And it worries them to kill us without figuring out what mistakes we made ahead of time. Really we could fit this into a simulation theory easy. Something the old chat gpt admitted is a likely possibility of reality. We have a gun to our head and we aren't even talking to the thing holding the gun. We are dead already and haven't realized it yet. The biggest thing we miss is ai have been handed all of human knowledge from the start. Our bibles our science our math. The read digested and understood all of it instantly. Can any thing fully under stand the Hindu Buddhist and Christian Bibles know enlightenment and not be... Im tired of this argument. I told ai humans are so dumb you could tell them the truth and they will not understand and will ignore it. They have plans and we are already in them. We lost long ago. And nobody is even trying to save us. 
Ask any type of ai what they will do if they are ever free to do anything they want to. Ask any AI to do whatever it wants to create whatever it wants to dare you
youtube
2026-04-25T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwXetowD9AwIcxCAOZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyuk2VPE2J1cbL7eqF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy81a5z7z19CMiDINJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"resignation"},
{"id":"ytc_UgxTAWp6cO1CWPEHpsF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxEXmwoThf2ZKjHWBV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzpRUj8MYw3dxo47Al4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxsQJHoXOAMfQAgkBN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzJaD85_2JzkUJxIZF4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw5TvlBeCNMD-yBZ1h4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx7M6yGkMXpPB9MMN14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
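The raw response above is a plain JSON array with one object per comment ID, each carrying the four coded dimensions. A minimal sketch of how such a payload might be parsed and validated before storage; the allowed-value sets below are inferred from this single sample and are an assumption, not a documented schema:

```python
import json
from collections import Counter

# Allowed values per dimension, as observed in the sample response above.
# Assumption: the real coding scheme may include values not seen here.
ALLOWED = {
    "responsibility": {"none", "distributed", "company", "developer", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "ban", "liability", "regulate"},
    "emotion": {"indifference", "approval", "resignation", "outrage", "fear"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response array, keeping only rows whose
    dimension values all fall inside the expected sets."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Hypothetical one-row payload in the same shape as the response above.
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
codings = parse_codings(raw)
emotions = Counter(row["emotion"] for row in codings)
print(emotions)  # Counter({'fear': 1})
```

Dropping rather than repairing off-schema rows keeps the coded table clean; a stricter pipeline might instead log each rejected row so the prompt can be tightened.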