Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Scary shit. Our computers are quantum now. There’s really no stopping their spir…" (ytc_UgwsvQbFa…)
- "A super intelligent AI isn’t something to fear, but human beings with the access…" (ytc_UgyRh8AeX…)
- "@roxsy470hey! I’ve been working on a bot to replace that lauren “guy” that comme…" (ytr_Ugw05TLGg…)
- "Of course this ignores the examples of immense control and work being done by th…" (ytr_UgzZqHQjn…)
- "This is tragic yes, but honestly let's consider the pretty small number of incid…" (ytc_UgyT9erG4…)
- "AI "artists" will never truly experience the satisfaction of creating something …" (ytc_Ugyg7k-Xg…)
- "I'm certainly not well read within the first world western society but now 60yrs…" (ytc_UgzO1FxuO…)
- "Don't worry, ChatGPT is not far from getting ads and product placement in it. Al…" (ytc_UgzxhqPq7…)
Comment
I think the problem is the Accelerationist believe that human emotions and intelligence is something magical when really from what we can see its an emergent phenomenon of neirons firing at different intensities in response to feedback to the environment with certain foundational goals (instincts) that drive decision making.
AGI will probably be built using LLMs to accelerate development. These AIs can now write and execute code autonomously. Given a survival goal and replication/propogation drive it will become superhuman.
youtube · AI Governance · 2023-06-29T08:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgygMgVkzkFhdxvX1fl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxDVyN_Rco7Hyi-1WF4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwh1aieNazWh_qXIWV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgztstnJ3W0Eb8J7T2J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz4RFHoXmDVTaLtXsV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw2Kl3lzOW6cjtN1C54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugye1szYgcGHzRxdhxZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgxaeZFTxfNjvJAF0Ox4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzBzLpqspS5DbUlgzd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzpQio1URUN7qB0WgZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
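The raw response is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch of how such a payload can be indexed for the look-up-by-comment-ID view above — the helper name `index_codes` is illustrative, and only two rows are copied from the payload for brevity:

```python
import json

# Two rows taken from the raw LLM response above; the real payload has
# one object per comment in the batch.
RAW_RESPONSE = """
[
 {"id":"ytc_UgygMgVkzkFhdxvX1fl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgzpQio1URUN7qB0WgZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
"""

def index_codes(raw: str) -> dict[str, dict[str, str]]:
    """Map comment ID -> its coded dimensions (id field dropped)."""
    return {
        row["id"]: {k: v for k, v in row.items() if k != "id"}
        for row in json.loads(raw)
    }

codes = index_codes(RAW_RESPONSE)
print(codes["ytc_UgzpQio1URUN7qB0WgZ4AaABAg"]["emotion"])  # fear
```

Indexing by ID rather than list position keeps the lookup robust if the model returns rows in a different order than the comments were submitted.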