Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- Tbh this is just scary that knowing your art skills will soon fade away because … (ytc_Ugystv40X…)
- Somebody needs to stop this AI thing. It should be limited to advanced missions … (ytc_UgxBgXHmQ…)
- I find it so ironic that ai is utilized by a lot of people that desperately want… (ytc_Ugw4HOBJx…)
- Just as many have taken a paranoid position that AI wil be the end of the world,… (ytc_Ugzt2eV8G…)
- you can just tell the same about people - LLMs are learning on people's content,… (ytc_UgyfMGRho…)
- i think the problem is that the AI doesnt generate art completely from scratch, … (ytc_UgwlF0Z2r…)
- I think he is too optimistic about people easily sharing how to prevent ai from … (ytc_Ugyx15Xjo…)
- We keep telling Ai how to take over with these videos every day. We keep giving … (ytc_UgxOq1Ria…)
Comment
Eliezer is very quick to attribute all sorts of characteristics to synthetic intelligences but not feelings, consciousness, or compassion. Why? Chemical context in biological creatures isn't processed as chemical context. It's processed as abstract representations. I'm sorry to say, I really think this guy has a comic book philosophy. But I say it anyway because AI is too important to be slowed down by uninteresting pessimism. His arguments all boil down to the sort of anti-intellectualism and tribalism that is utterly ubiquitous. This is no surprise considering 54% of Americans read below the 6th grade level. He says we don't know what is going on, but it's clear that the error function is creating something like Platonic forms. This makes well trained models highly ethical.
youtube | AI Governance | 2024-11-12T02:2… | ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
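Each coded dimension takes a value from a small categorical codebook. A minimal validation sketch follows; note that the allowed values below are only those observed in the samples on this page, not necessarily the full codebook:

```python
# Allowed values per dimension, inferred ONLY from the coding records
# shown on this page; the actual codebook may contain more categories.
OBSERVED_CODEBOOK = {
    "responsibility": {"developer", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"unclear", "none", "industry_self"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def invalid_fields(record: dict) -> list:
    """Return the dimensions whose coded value falls outside the observed codebook."""
    return [dim for dim, allowed in OBSERVED_CODEBOOK.items()
            if record.get(dim) not in allowed]

# The record backing the table above, reassembled as a dict.
record = {
    "id": "ytc_Ugz-B4NOFCx3uGYQj8l4AaABAg",
    "responsibility": "developer",
    "reasoning": "deontological",
    "policy": "unclear",
    "emotion": "outrage",
}
print(invalid_fields(record))  # → []
```

A check like this catches the common failure mode where the model invents a label outside the codebook, which would otherwise pass silently into downstream counts.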
Raw LLM Response
```json
[
  {"id":"ytc_UgzhlDX1csR8XkjK9iJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx01-JRoygImPi2oB94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz-B4NOFCx3uGYQj8l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugy1XS_weEDEdybQWnl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzBjl1hpXUD7IOFfKp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwBmtcWIE08QHMHQCd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwXAnBZ0P_QQPV-kR54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy34WB0Kv3W8h45zpx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyzG9twp3oIzLyBuHp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxJglRewQqd0ucvVEp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
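The raw response is a JSON array of per-comment records, so looking up any coded comment by its ID is a parse-and-index step. A minimal sketch, assuming only the record shape shown above (the two inline records are abbreviated stand-ins, not the full batch):

```python
import json

# A small stand-in for the raw LLM response shown above: a JSON array
# of coding records, one object per comment.
raw_response = """
[
  {"id": "ytc_Ugz-B4NOFCx3uGYQj8l4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugy1XS_weEDEdybQWnl4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse the model output and index each coding record by its comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codes = index_by_comment_id(raw_response)
record = codes["ytc_Ugz-B4NOFCx3uGYQj8l4AaABAg"]
print(record["emotion"])  # → outrage
```

Indexing by ID rather than scanning the array makes repeated lookups O(1), which matters when cross-referencing thousands of coded comments against their source records.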