Raw LLM Responses
Inspect the exact model output for any coded comment.
Comments can be looked up by comment ID, or picked from the random samples below:
- "Waiting for AI streamers to replace react streamers like Asmon. They basically e…" (ytc_Ugyl6KT2P…)
- "@UltimateFessd Let me put this into perspective; You make a sculpture, along wi…" (ytr_Ugzb-Vc58…)
- "@AFairlynormalchannel The genie's already out of the bottle when it comes to wha…" (ytr_UgyF_LdjZ…)
- "Its not really artificial intelligence we need to worry about, its the artificia…" (ytc_Ugxc_hTFU…)
- "AI will take part everywhere! For example, In the Future, Flights come with buil…" (ytc_Ugw13j6ZT…)
- "Thinking you can make AI safe is stupid foolish diabolical naive delusional and …" (ytc_UgwqaRhmb…)
- "Coming from people that are moving underground to hide from God's wrath. Your fa…" (ytc_UgwrnJ6m1…)
- "This is one of the most expansive comments in the thread and it touches on somet…" (ytr_UgxHQXoDO…)
Comment
These are not rabbit holes. They are arguments that go round in ever decreasing circles.
You need to define your terms before you can begin to describe these problems. Yudkowsky and Wolfram have a very good try here, and they are two of the most qualified thinkers on the subject; however, I am three hours in and they still haven't described or agreed on the terms necessary for discussing AI risk in detail.
- Define intelligence
- Define general intelligence
- Etc.
What they and many others don't seem to discuss, in detail at least, is the vital element: agency, or what Leibniz described as will. This is, I think, the missing element. When we have a machine with genuine agency, there are massive risks, end of story. Whether we can define, or agree on a definition of, will or agency is another question!
Is a reward function enough of an impetus? I don't know how reward functions are built and how or even if they differ across different systems. What a time to be alive.
youtube · AI Governance · 2024-12-06T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
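Each coding decision reduces to four categorical dimensions plus a coding timestamp. As a minimal sketch of the record shape, assuming the label sets are exactly those observed in this section (the actual codebook may define additional values), the schema could be typed like this:

```python
from typing import Literal, TypedDict

# Label sets observed in this section only; the full codebook
# used by the pipeline may include more values.
Responsibility = Literal["developer", "user", "ai_itself", "distributed", "none", "unclear"]
Reasoning = Literal["consequentialist", "deontological", "virtue", "unclear"]
Policy = Literal["regulate", "liability", "industry_self", "none", "unclear"]
Emotion = Literal["fear", "outrage", "approval", "indifference", "mixed"]

class CodedComment(TypedDict):
    """One coding decision for a single YouTube comment."""
    id: str                        # "ytc_…" (top-level comment) or "ytr_…" (reply)
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```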
Raw LLM Response
```json
[
  {"id":"ytc_Ugzyw7P6UIG7qr9orm94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz5qfO2p5ouopqxF9J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw5jx3JN_iJjVdgF-V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxgabcdIuRhNkDAGoZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzK0cxdklJv4XjEKQV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugwk38JoiF5nupttEiV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_UgxUpWrqOtfeJUqbHoB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw9Yn37_qtH16HPxL54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzRiCvRXTjY9wSaOpB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxrxwC9GQeGPZSOxHV4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
```
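The raw response is a JSON array with one object per comment in the batch. A minimal sketch of how the look-up-by-ID view above could be served from such a response; the function and variable names are hypothetical, not the tool's actual code:

```python
import json

def index_by_comment_id(raw_response: str) -> dict[str, dict]:
    """Parse a raw LLM batch response and index the codings by comment ID."""
    records = json.loads(raw_response)
    return {record["id"]: record for record in records}

# Hypothetical usage with two records taken from the response above:
raw_response = """[
  {"id":"ytc_Ugzyw7P6UIG7qr9orm94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz5qfO2p5ouopqxF9J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]"""

codings = index_by_comment_id(raw_response)
coding = codings["ytc_Ugzyw7P6UIG7qr9orm94AaABAg"]
print(coding["responsibility"], coding["emotion"])  # developer fear
```

A real pipeline would also need to handle malformed model output (e.g. wrap `json.loads` in a try/except), since nothing guarantees the LLM returns valid JSON.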