Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up directly by comment ID or by browsing the random samples below.
Random samples
- "Poisoning ai / Yes, yes / This must become a trend / The 1 poison dnd paladins are all…" (ytc_UgxRpfXG5…)
- "Hey Godfather of Ai : at 50:45 im thinking because I’m a Journeyman plumber & yo…" (ytc_UgzxRT_Yo…)
- "At some point we will give the AI thr capability to interact with the real world…" (ytc_UgzTIU7hQ…)
- "Stop humanizing AI! It is not a brain, doesn't think and certainly doesn't learn…" (ytc_Ugygxt27o…)
- "Is this supposed to be a demonstration of why AI is an absolutely terrifyingly B…" (ytc_UgxtMgNNY…)
- "I might gonna buy it if it's made in 2.5D than fracking the animation with AI. A…" (ytc_UgxKTm6IQ…)
- "They'll probably just have an AI write one in the style of a quote from the Bhag…" (ytr_UgwxBoBRK…)
- "Ye, there absolutely is a way! It's called "AI poison". Basically, a small "haze…" (ytr_UgzCM7D-N…)
Comment
These things are not speculation. You said there's a risk that AI values would be apposed to human values, which values are those? We are not aligned on human values. There are bad actors who want people dead everyday, many of them act. This happens every day. Why won't there be a bad actor who uses AI to develop a biological weapon? It's already happened without AI. This conversation is great, but what is wrong with us that we're watching a crash in slow motion and while the car is flying through the air we're debating whether or not it will land on its top. This is serious and we're just talking about it and watching it unfold.
youtube · AI Governance · 2025-10-21T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
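
The four coded dimensions are categorical. As a rough sketch only (the value sets below are just those observed in the sample batch shown under Raw LLM Response, not necessarily the full codebook), a coded record could be represented and checked like this:

```python
from dataclasses import dataclass

# Value sets observed in the sample batch below; the real codebook may differ.
RESPONSIBILITY = {"developer", "government", "company", "distributed", "ai_itself"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed"}
POLICY = {"regulate", "liability", "none", "unclear"}
EMOTION = {"fear", "outrage", "approval", "frustration", "resignation", "mixed"}


@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Reject any value outside the observed categories.
        for value, allowed, name in [
            (self.responsibility, RESPONSIBILITY, "responsibility"),
            (self.reasoning, REASONING, "reasoning"),
            (self.policy, POLICY, "policy"),
            (self.emotion, EMOTION, "emotion"),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected {name} code: {value!r}")
```

Running `CodedComment(**entry).validate()` on each parsed entry would flag any row where the model drifted outside the observed codes.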
Raw LLM Response
```json
[
{"id":"ytc_Ugyebo4BrVheoNfNaiV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugx-aJrMns3RaKj9a0d4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwhjNSYSWCytZ4ZfO94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"frustration"},
{"id":"ytc_UgyRDQOFVMyX0w2sf4p4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzLPqbJKyLMvF70m1x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzGzRc32csbkPYIF2t4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwvn9lJQoLoX5KRR-h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx8cc-c96Q6mQmO49V4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyvDuzOwKlnR58Cuip4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzQF0MxD0GUhl4Ah_R4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
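
Each raw response is a JSON array with one object per comment in the batch, keyed by the comment ID. A minimal sketch of the by-ID lookup described at the top of this section, assuming the raw batch responses are stored as strings like the one above (the function names and storage layout here are illustrative, not the tool's actual implementation):

```python
import json


def index_batch(raw_response: str) -> dict[str, dict]:
    """Parse one raw LLM batch response and index its entries by comment id."""
    entries = json.loads(raw_response)
    return {entry["id"]: entry for entry in entries}


def lookup(raw_responses: list[str], comment_id: str) -> dict | None:
    """Search every stored batch response for a single comment id."""
    for raw in raw_responses:
        try:
            batch = index_batch(raw)
        except (json.JSONDecodeError, KeyError, TypeError):
            # Malformed model output: skip this batch rather than fail the lookup.
            continue
        if comment_id in batch:
            return batch[comment_id]
    return None
```

For example, `lookup(raw_responses, "ytc_UgzQF0MxD0GUhl4Ah_R4AaABAg")` would return the last entry above, whose codes (distributed, consequentialist, regulate, fear) match the Coding Result table for this comment.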