Raw LLM Responses
Inspect the exact model output for any coded comment by looking the record up by its comment ID.
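A minimal lookup sketch in Python, assuming the pipeline appends each batch to a hypothetical `raw_responses.jsonl` file whose lines hold a `coded_at` timestamp and the verbatim model output in a `raw` field (itself a JSON array, as shown at the bottom of this page):

```python
import json
from pathlib import Path

def find_raw_coding(comment_id: str, store: Path = Path("raw_responses.jsonl")):
    """Scan stored batch responses for the coded record of one comment.

    Assumes each JSONL line is a batch: {"coded_at": ..., "raw": "<model output>"}.
    Returns (coded_at, record) for the first match, or None if not found.
    """
    for line in store.open(encoding="utf-8"):
        batch = json.loads(line)
        try:
            records = json.loads(batch["raw"])  # model output is a JSON array
        except json.JSONDecodeError:
            continue  # skip batches the model did not return as valid JSON
        for rec in records:
            if rec.get("id") == comment_id:
                return batch["coded_at"], rec
    return None
```

For example, `find_raw_coding("ytc_UgzsZtPkhMQCcCOmHgB4AaABAg")` would surface the record shown in the coding result below.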
Random samples
- @mez8384 The main property of "in-context learning" which allows large enough l… (ytr_UgxgtOpmk…)
- Too late, already told my secrets, wishes, about my job, medical conditions, hob… (ytc_Ugy7Bn5fx…)
- So if i used my self driving features and it kills someone, and i go to court, w… (ytc_Ugzh1BTWw…)
- "take over by skynet" cliche / If said can remoted by car company or can be hack … (ytr_UgwCb16X4…)
- when AI can maintain and repair itself, humanity will serve no purpose and goodb… (ytc_UgzT6c2Q8…)
- Let assume all the data on the internet is used to train ai . And human are repl… (ytc_UgwFI8u1t…)
- As long as Oligarch Trump is ur president it will become even worse , he forces … (ytr_UgxrYTXgq…)
- My question is at this point what is wealth ? What is money good for? Who purcha… (ytc_UgyMwaSfu…)
Comment
Eliezer's take on the 'paperclip maximizer' argument doesn't seem particularly applicable to current LLM architectures. When I ask ChatGPT for an answer, it neither gets stuck in an infinite loop nor produces endless responses in an attempt to 'maximize' its objective. Working with agents also involves setting constraints: we can specify a finite number of actions the model should run, and there's a system of permissions to accept or deny subroutine actions. It's unclear why Mr. Wolfram didn't tie this argument to known, practical AI procedures.
Also, if AGI truly achieves human-level general intelligence, it would presumably possess practical judgment capabilities. ChatGPT, for instance, provides finite responses rather than infinite outputs, and an AGI would theoretically have even more refined judgment. Just as adults have better risk assessment skills than children, an AGI should theoretically evaluate actions within realistic limits rather than pursuing infinite maximization of a single goal.
youtube · AI Governance · 2024-11-13T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
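The four dimensions above can be checked mechanically. A small validator in Python, where the allowed values are only those observed in the codings on this page, not a complete codebook (an assumption):

```python
# Allowed values inferred from the codings shown on this page; the actual
# codebook may define additional categories (assumption).
CODEBOOK = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"unclear", "mixed", "consequentialist"},
    "policy": {"none"},
    "emotion": {"approval", "indifference", "mixed", "outrage", "fear"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if clean)."""
    problems = []
    for dim, allowed in CODEBOOK.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unexpected {dim!r} value: {value!r}")
    return problems
```

Running `validate` on the record behind this table returns an empty list: every value falls inside the observed categories.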
Raw LLM Response
[
{"id":"ytc_UgwfYHnRIec_UjaORrV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgycnzNreGpB3a7a5Hp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzd-ma0ujZAb5HhHFp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzsZtPkhMQCcCOmHgB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxYn9JXLlg20G_a09d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz2_DwgYk7tALNnvm54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwad4p8PY-nWvnjzPN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx0w3H6RV1sNvUp1ZV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyR6_fTp_kjrcdO_SV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwxlrHOJKfspbgJ1TZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
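Since the model returned a well-formed JSON array, downstream code can parse it directly. A short Python sketch that indexes the batch by comment ID and tallies one dimension; only three of the ten records are inlined here for brevity:

```python
import json
from collections import Counter

# Three of the ten records from the raw response above (rest elided).
raw = """[
 {"id":"ytc_UgwfYHnRIec_UjaORrV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzsZtPkhMQCcCOmHgB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugz2_DwgYk7tALNnvm54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]"""

records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# The "Coding Result" table above corresponds to exactly this record:
print(by_id["ytc_UgzsZtPkhMQCcCOmHgB4AaABAg"])

# A quick tally over one dimension:
print(Counter(r["emotion"] for r in records))
```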