Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment by its ID, or inspect one of the random samples below.

Random samples:

- `ytc_UgwPNKe3s…`: "Trust me, I know for a Fact ( don't ask me how, because I cannot reveal how I k…"
- `ytr_UgyYh8eGI…`: "Who is gonna stop it. You? I will give my AI full access to internet and it alre…"
- `ytc_Ugyb3eYr8…`: "Already A I has sacrificed contract in which all commerce runs and and all who g…"
- `rdc_ohyn5m5`: "Just see how realistic it is, AI generated photos were easily identifiable with …"
- `ytc_UgytX7Bir…`: "The “gorilla problem” — is popular in AI safety circles. It comes from the idea …"
- `ytc_Ugz_ewtgd…`: "I saw her on a show about AI robots. The Chinese robots are more life like but …"
- `ytc_UgyEUdLbn…`: "These robots don’t have AI, they can only do EXACTLY what they were programmed t…"
- `ytc_Ugyg6bxcL…`: "I'm an IT recruiter. I interviewed someone recently who bragged about saving 20%…"
Comment
i think the issue with ai ist the goal is too broad and doesn't include specifics like "be helpful to the user"
what if the user wants an atom bomb then you need to change it:
"be helpful to the user, without harming anyone"
what if the user asks if he should kill someone trying to kill him?
you see the issue becomes how much reason can you put into a prompt and data, because that's what they lack, reason.
maybe they should say "be reasonable"
but then that will also include what "reasonable" means on reddit.
youtube · AI Moral Status · 2025-12-15T23:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_Ugy08cRqfdWrfiPvMfR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz5XwfLhOgBo9WKKuR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwADlEM6OFCHxRLhCN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugxn60-oigQPBiW8Umx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxarHxDLb0wO3Oi_cV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx9lrwYkfafZVwn8th4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzCG0MF8m37sHu0Nil4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzb4gUvOBUau98PxIJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzbqkVZKD_jtAdABWp4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzmnPLy-8m8qRGaBUp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}]
```
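A batch response like the one above can be parsed and indexed by comment ID to recover the coded dimensions for any single comment. The sketch below is illustrative only: it assumes the JSON array structure and field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) shown in the response, and the `index_by_id` helper is a hypothetical name, not part of the tool.

```python
import json

# Illustrative sketch: two records copied from the raw response above.
# The field names mirror that response; the helper itself is hypothetical.
raw_response = '''[
  {"id": "ytc_UgwADlEM6OFCHxRLhCN4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugx9lrwYkfafZVwn8th4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]'''

def index_by_id(response_text: str) -> dict:
    """Parse a raw batch response and index its records by comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codes = index_by_id(raw_response)
print(codes["ytc_UgwADlEM6OFCHxRLhCN4AaABAg"]["policy"])  # liability
```

Indexing once and looking up by ID is what makes the "inspect any coded comment" view cheap: each table of dimensions is just one record pulled from the parsed batch.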