Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
The fact that an AI generated piece of art can infringe on someone else's copyri…
ytc_Ugz5g_x-q…
If we're going to have AI then we may as well use it for good…
ytc_UgzMmuSvA…
I did a thought experiment with ChatGPT - I specified it must always answer as i…
ytc_UgwuL8t0x…
For some time now, machines have been doing the same jobs as people without gett…
ytc_UgyMTEpIK…
Are 95% of those people in a room with max capacity of 100 people who use AI reg…
ytr_Ugwy-iDng…
Understanding is just applied context. It will happen one day and at the speed A…
ytc_UgxywwwY7…
What i wonder is, in chatgpt you can toggle this thing that says that openai won…
ytc_UgxiJrCc1…
It’s weird how some people would defend the ai and they might not even be an art…
ytc_UgwNGCpKz…
Comment
I think it got a little frustrating towards the end. Stephen kept saying "I think this is what you are saying" or "here is what you may want to say", but kept missing what Eli was saying, again and again. In a nutshell, I think part of what Eli was saying is that it could play out like the game of telephone. Humans may give the AI an outer goal (which carries the implicit goal of not killing humans); in the first approximation the AI might take it, interpret it close enough to the outer goal, and, now that it is efficient, start coming up with the most efficient strategies to achieve that outer goal. But it may start iterating on its own with slight, novel variations, and over several iterations the goal may drift. And unless its internal deliberations, so to speak, are constantly course-corrected and checked to make sure no danger to humans is introduced, its derived or interpreted goal may introduce aspects of danger to humans. One of the key things about this new technology is that the new AI is not smarter than one human, but it is smarter than all humans in all areas of human endeavor that it was trained on.
Secondly, I was surprised to find Stephen using active verbs for things we observe to have happened due to evolution. The most important thing is that evolution is not an active, agentic process. It is the observed effect of natural selection, under the contemporary environmental conditions, on populations of organisms. That is all. We should never use active phrasing like "evolution did this" or "evolution designed that". It is the statistical effect of environments on populations. The better-suited members of a population survive and thus pass on their genetic heritage. And in fact, if the environment changes suddenly, an advantageous genetic trait can become a liability. For example, a gene for more, white fur on a bear in a cold ice age (snow is white) may become a liability if there is a warming event after the ice age. The furry bear may die of heat, the white bear may no longer camouflage well, and the population of white furry bears will dwindle. It is not as if evolution came up with a plan and said "ah... now let us reduce the white furry bears".
youtube
AI Governance
2024-11-12T03:3…
♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzhlDX1csR8XkjK9iJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx01-JRoygImPi2oB94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz-B4NOFCx3uGYQj8l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy1XS_weEDEdybQWnl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzBjl1hpXUD7IOFfKp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwBmtcWIE08QHMHQCd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwXAnBZ0P_QQPV-kR54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy34WB0Kv3W8h45zpx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyzG9twp3oIzLyBuHp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxJglRewQqd0ucvVEp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
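A raw response like the one above can be parsed and validated before the codes are stored. A minimal sketch in Python, assuming the allowed values per dimension are the ones observed in this response (the actual codebook may define more categories than appear here):

```python
import json

# Allowed values per dimension, inferred from the responses shown above;
# the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"unclear", "none", "industry_self"},
    "emotion": {"indifference", "approval", "outrage", "fear", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes},
    dropping any record with a missing or unknown dimension value."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        codes = {dim: rec.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[rec["id"]] = codes
    return coded

# Hypothetical example input; IDs are made up for illustration.
raw = '''[
  {"id": "ytc_example1", "responsibility": "developer",
   "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_example2", "responsibility": "developer",
   "reasoning": "bogus_value", "policy": "unclear", "emotion": "outrage"}
]'''
coded = parse_coding_response(raw)
# Only the first record survives validation; the second has an
# unrecognized "reasoning" value and is dropped.
```

Dropping (rather than repairing) invalid records keeps the stored codes clean; rejected IDs can be re-queued for a retry pass if the model occasionally emits off-codebook values.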