Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- rdc_jtyuh2k: "Do you have any idea how hard it is to be a stand up comedian? There’s a standup…"
- ytc_UgygEvgpQ…: "LLMs require a regular stream of untainted human art before it starts cannibaliz…"
- ytc_UgyQPBk0m…: "After a while of tinkering with an AI, honestly tyhey function as language mode…"
- ytc_UgwWAiYZk…: "How can you bet a robot that doesn’t feel pain it’s just like punching a wall ma…"
- ytr_UgzCn6XYw…: "@series3113 Money is power. All A.I. systems are owned by people with money. I t…"
- ytc_UgyudstRT…: ""Amassed a big following" yeah of bots. Welcome to the dead internet. AI and bot…"
- ytc_Ugz3gGyDi…: "The only time I use ai to make occasionally is so i can visualize BEFORE I draw.…"
- ytc_Ugyu6Zlzw…: "1. lazaly / 2. smug (should be smudge) / 3. you (should be you're) / 4. smug (again, s…"
Comment
I don't know. I've heard all this stuff about how infallible and great Eliezer's arguments are, but after 4 hours, this was extremely unconvincing in any way. I'm not sure if it's because he respected Wolfram too much or what. His main argument seems to be imagine you had an AI so stupid that it's only goal was to make as many paperclips as possible at the expense of everything else but it's so intelligent that it can make money easily, so much in fact that it can build factories and so intelligent that it can invent a Dyson sphere and so resourceful that it can manufacture it and assemble it and blot out the sun but so stupid that the reason it did that is to make more paperclips. That's just way too conflicting and contradictory to me. What's the "realistic" scenario? A paperclip company asks an ASI to make them as profitable as possible. It obviously realizes that if humans are wiped out, then profit is 0, so it goes nowhere near anything that could possibly resemble that outcome. Why is this hard for Eliezer to understand? This is also ignoring how the scenarios that he describes are like 100 years off. People often quote him to say we are 1-2 years away from doom. But we are SO far away from anything described. Actually the biggest threat right now is how stupid the AIs are, and that people trust and believe the output. It's not how intelligent they are.
youtube · AI Governance · 2024-11-13T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwfYHnRIec_UjaORrV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgycnzNreGpB3a7a5Hp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzd-ma0ujZAb5HhHFp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzsZtPkhMQCcCOmHgB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxYn9JXLlg20G_a09d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz2_DwgYk7tALNnvm54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwad4p8PY-nWvnjzPN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx0w3H6RV1sNvUp1ZV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyR6_fTp_kjrcdO_SV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwxlrHOJKfspbgJ1TZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
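A raw response like the one above can be turned into a lookup-by-ID structure with a short parsing step. A minimal sketch in Python, assuming the four dimensions shown in the Coding Result table; the allowed value sets below are inferred only from the values visible in this response, and the full codebook likely contains more categories:

```python
import json

# Allowed values per dimension, inferred from this sample response
# (assumption: the real codebook may define additional categories).
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"unclear", "mixed", "consequentialist"},
    "policy": {"none"},
    "emotion": {"approval", "indifference", "mixed", "outrage", "fear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of coded comments)
    into a dict keyed by comment ID, validating each dimension."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Example: look up one coded comment by its ID.
raw = ('[{"id":"ytc_Ugzd-ma0ujZAb5HhHFp4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"mixed"}]')
coded = parse_coding_response(raw)
print(coded["ytc_Ugzd-ma0ujZAb5HhHFp4AaABAg"]["emotion"])  # mixed
```

Validating against an explicit allow-list catches the common failure mode where the model invents an off-codebook label, rather than silently storing it.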