Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Ironically I got an ad for "ChatGPT image is now better than ever" or something …" (ytc_UgywOsmKE…)
- "Thank you for enlightening me. I honestly don’t know what disturbs me more that…" (ytc_UgwccQxhg…)
- "If chat gp4 scan the web for is information does that mean he will scan the web …" (ytc_UgwmS7ejq…)
- "I thought Blondie was also deepfaked. I was like, "that looks exactly like blond…" (ytc_UgyeQD-vh…)
- "the AI enforces his believe with positive feedback, AI lies without bating an ey…" (ytc_Ugy8UeAey…)
- "The stupidest thing about the man who got arrested after showing his ID was that…" (ytc_UgwjqETRC…)
- "Something ill never understand abt ai "artists" is why they would skip the proce…" (ytc_UgxxjGkL_…)
- "@JUICYbluepanda2 there is zero skill required to use gen A…" (ytr_Ugw0Q8S9z…)
Comment
The argument is not that superintelligence is possible via LLMs, the argument is that superintelligence as a goal is currently being chased via lossy uncontrollable opaque processes like the ones that produce LLMs, and if future efforts built on those processes ever *do* start to bear fruit somewhere in the trillions of completely inscrutable floating-point numbers, then we will never ever have a hope of understanding it or properly tweaking it. We'd have as much chance of succeeding as staring at a list of neurons in one's brain and trying to address poor behavior.
And if a superintelligence is grown so haphazardly, the chances that it will be something we *want* are next to zero: if anyone builds it, everyone dies.
- Platform: youtube
- Video: AI Moral Status
- Posted: 2025-10-31T05:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytr_Ugwza1mVB8TWkmA04Dx4AaABAg.AOvA02JSTawAPQ-dPIDcHg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgxqWyhZtzQCHEiIUT54AaABAg.AOv9yJE_FXGAOvCyVLGxnR","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytr_UgxwuAol13egUtpLs_t4AaABAg.AOv9qcfbXlcAOvJmmSok-f","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgxwuAol13egUtpLs_t4AaABAg.AOv9qcfbXlcAOvMnFNDYZD","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwxqDERJo-sXunM51J4AaABAg.AOv9o9sKC1gAOvJj_WcXwT","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugyl6gXZeneSWSmic8B4AaABAg.AOv9BleYLq4AOvQPK8KSxk","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytr_Ugyl6gXZeneSWSmic8B4AaABAg.AOv9BleYLq4AOwkzn6Cqjn","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgyUQhhgCnVPe_EpxBp4AaABAg.AOv8vYp9ddgAOvjnsAOp_c","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytr_UgyGzV4p_AWQhCNVB454AaABAg.AOv8v0u16HcAOw67fHLGBX","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgwCYqWq4Qbl8PSoD514AaABAg.AOv8jxc80nxAOv96h7KNon","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
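A batch response like the one above can be parsed into per-comment codes with a few lines of Python. The sketch below is a minimal, hypothetical parser: the `ALLOWED` value sets are inferred only from the codes visible in this response, and the sample record uses a made-up `ytr_example` ID; the tool's actual codebook and validation logic may differ.

```python
import json

# Allowed values per dimension, inferred from codes seen in this one
# response batch; the real codebook likely defines more values (assumption).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "distributed", "developer"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"approval", "fear", "outrage", "indifference", "mixed", "resignation"},
}

def parse_raw_response(raw):
    """Parse a raw LLM batch response into {comment_id: codes},
    rejecting records with unknown dimensions or values."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        codes = {k: v for k, v in rec.items() if k != "id"}
        for dim, val in codes.items():
            if dim not in ALLOWED or val not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={val!r}")
        coded[cid] = codes
    return coded

# Hypothetical single-record batch, same shape as the response above.
raw = ('[{"id":"ytr_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(parse_raw_response(raw)["ytr_example"]["policy"])  # regulate
```

Keying the result by comment ID matches how this page looks records up, and failing loudly on an unknown code value catches the LLM drifting off-schema rather than silently storing a malformed code.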