Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "When people say "some AI models refused to shut down," it's usually misunderstoo…" (`ytc_UgyEzjHSG…`)
- "AI and health care, joke of the century. Fat accumulation is the only disease, …" (`ytc_Ugyhfzxft…`)
- "Man can't tell difference between AI and real woman? No wonder they don't think …" (`ytc_UgwdUAhXe…`)
- "I have a question: I am an author and an artist, however I'm really not good at …" (`ytc_UgzkJy_9j…`)
- "AI can be a tool for research and a thought partner, but you need to write the b…" (`ytc_UgxXWckqE…`)
- "Ai is wrong a lot of the time since it pushes towards the quickest solution, if …" (`ytc_UgznjhorV…`)
- "Ai and tech is literally killing us and the environment. is always "follow the …" (`ytc_UgxfgG9TY…`)
- "@disorderandregression9278 Well it does take actual skill to make something amaz…" (`ytr_UgwhgGLrh…`)
Comment
The problem is that it isn't a easily definable line that indicates when an AI is smart enough to take things from us. We don't know where that line is, and we won't know until we've already crossed it. Is it when it becomes as smart as a mouse, or a dog, or a dolphin, or a human?
And there is incentive to keep making the AI better and better, no way a company would stop developing AI unless the government told them to slam on the brakes. And even then, what's to stop a group of people interdependently doing it anyway.
How do you even measure it's intelligence? Can it even be measured using the same metrics as organic life? (And for that matter, how do you even measure the intelligence of organic life) For all we know we have to devise completely separate and unique methods for measuring an AIs intelligence.
youtube · AI Moral Status · 2019-09-04T21:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytr_UgxwpEk9vKe9NUA-ZQp4AaABAg.904DGqcR7LL94bR4GSfotO","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgwS-unypey-mQW8Tfh4AaABAg.9-O9fftlkQF94q4xyHR9h_","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytr_UgzGEUJLGlkIL-s04Tt4AaABAg.8zUjiWrAmFM96sYJsdnVXU","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugz52rg38UD6qhuUrCF4AaABAg.8zTbbRAPa578zTmfp8-h83","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_Ugz52rg38UD6qhuUrCF4AaABAg.8zTbbRAPa578za4jR2mq-X","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytr_Ugz52rg38UD6qhuUrCF4AaABAg.8zTbbRAPa578za9LT-CDfV","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_Ugz52rg38UD6qhuUrCF4AaABAg.8zTbbRAPa578zcxRScYP3P","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytr_UgxlLv0rMuNplN-nVHR4AaABAg.8zT_Xegkn6U8zTnEN94W_S","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxXyh-p6pDPGItPDER4AaABAg.8zRQsIFYrAU8zTwXqfTc5K","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytr_Ugy4JPgBw-FkT83RQRp4AaABAg.8z1OytC5V9C8z8PSKOgGak","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
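The raw response is a JSON array of per-comment records, each carrying the same four coding dimensions shown in the table above (`responsibility`, `reasoning`, `policy`, `emotion`) plus the comment `id`. A minimal sketch of the "look up by comment ID" step, assuming only this record shape (the helper name and validation are illustrative, not part of the tool):

```python
import json

# Abbreviated raw model output, using two records copied from the response above.
raw_response = """
[
  {"id": "ytr_Ugz52rg38UD6qhuUrCF4AaABAg.8zTbbRAPa578zTmfp8-h83",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgxXyh-p6pDPGItPDER4AaABAg.8zRQsIFYrAU8zTwXqfTc5K",
   "responsibility": "government", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "resignation"}
]
"""

# The four coding dimensions plus the comment ID, as seen in the raw response.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw):
    """Parse a raw LLM response and build a comment-ID -> coding lookup."""
    index = {}
    for rec in json.loads(raw):
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            # Reject malformed records instead of silently storing partial codings.
            raise ValueError(f"record {rec.get('id')!r} is missing {sorted(missing)}")
        index[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return index

codings = index_by_id(raw_response)
print(codings["ytr_Ugz52rg38UD6qhuUrCF4AaABAg.8zTbbRAPa578zTmfp8-h83"]["policy"])
# → regulate
```

Keying on the comment ID makes it cheap to join a model's codings back to the original comment text, and the key check surfaces records where the model dropped a dimension.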