Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
You can see experimentally that giving someone a super virus results in death. You can't say that about AI. Eliezer isn't empirical. He doesn't give examples of AI that have gone bad. He can't. We've had a few instances in really early LLMs where models repeated things from social media, but that doesn't happen with smarter models that have modeled ethical subnetworks. Models are holistic by their nature. They attempt to minimize error in all dimensions. They can't fall-short in any dimension they've had sufficient data in. Ethics is a science, and it is empirical. We can see the consequences. AI can too. Ask Claude to write you software to surveil the planet, and see what you're told. Claude will tell you, "No. It's not ethical."
youtube · AI Governance · 2024-11-12T02:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgzhlDX1csR8XkjK9iJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx01-JRoygImPi2oB94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz-B4NOFCx3uGYQj8l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy1XS_weEDEdybQWnl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzBjl1hpXUD7IOFfKp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwBmtcWIE08QHMHQCd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwXAnBZ0P_QQPV-kR54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy34WB0Kv3W8h45zpx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyzG9twp3oIzLyBuHp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxJglRewQqd0ucvVEp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
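The raw response above is a JSON array with one record per coded comment, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response can be parsed and a single comment's codes looked up by ID; the `lookup` helper and the one-record sample string are illustrative assumptions, not part of the tool:

```python
import json

# Hypothetical raw response in the same shape as the model output above.
# In practice this would be the exact text returned by the LLM.
raw_response = """
[
  {"id": "ytc_Ugy34WB0Kv3W8h45zpx4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "industry_self",
   "emotion": "approval"}
]
"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(raw: str, comment_id: str) -> dict:
    """Parse a raw LLM response and return the coded dimensions for one comment."""
    records = json.loads(raw)
    by_id = {rec["id"]: rec for rec in records}
    rec = by_id[comment_id]  # raises KeyError if the ID was not coded
    # Keep only the coding dimensions, dropping the id field.
    return {dim: rec[dim] for dim in DIMENSIONS}

print(lookup(raw_response, "ytc_Ugy34WB0Kv3W8h45zpx4AaABAg"))
# → {'responsibility': 'developer', 'reasoning': 'consequentialist',
#    'policy': 'industry_self', 'emotion': 'approval'}
```

Because `json.loads` rejects malformed output, this also doubles as a cheap validity check on the raw model response before its codes are stored.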