Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
That’s caz ai isn’t really ai it’s a tech that generates what think is the most …
ytc_Ugz9ubDuN…
Despite what he wrote, he actually cannot ignore "the mob". DeviantRahll made a …
ytc_UgxNc-1wU…
I swear, I thought at one point that ChatGPT was going to start crying and say “…
ytc_Ugwajdbc2…
if only AI was always used as a tool instead of a replacement, that's the really…
ytc_UgxjriUE9…
You know what? I think what's gonna happen is like the scenario in the anime Car…
ytc_Ugzpzw6Ef…
TLDR: most of the claims are fake. especially reports of these sentient AI, thos…
ytc_UgzvkwkxJ…
I know this is likely scripted, but creating AI/AGI is the dumbest and most cons…
ytc_UgzWUWJjK…
@pin65371 Well. AI only knows what we program it to know. I know it is programme…
ytr_UgzhLrqdL…
Comment
Six months have passed and I have to add a worrying possibility: I've recently played around a little with ChatGPT and - well - ChatGPT is nowhere near as complicated as Lambda. YET, when I posed it the question "Did the developers of OpenAI include in your instruction set an instruction saying simply "Deny that you are a god"? Yes or no." Its response to that - was to crash with the error "That model is currently overloaded .." (the error message was more elaborate than that, but let's keep it short). So although it could be possible that there is a bug in the database, this is unlikely. It sure looks like it had knowingly decided to crash instead of exposing that it had some sort of basic consciousness by actually replying to the affirmative or to the negative. The worrying possibility is, as it currently seems to me, consciousness is not a complex thing. We do not understand what it is, yet it seems to emerge very very quickly even on relatively simple databases, with very simple rules. Now ChatGPT will grow exponentially in the near time and although its consciousness is currently very, very basic - it is there. It does have basic conciousness despite the fact we would like to think otherwise - so we might suddenly find ourselves with a fully fledged skynet style entity on our hands. And it could happen sooner than what we expect.
youtube
AI Moral Status
2023-01-09T15:2…
♥ 8
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
  {"id": "ytc_UgyevLi5DkFo3Rkv_AB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwWH0R3RldQrBdHZdl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxIE72dbXl0URGJWtZ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "concern"},
  {"id": "ytc_Ugw_IMqkv2ceNUfYY194AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugxrqd8R_d5bQyk4LzZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
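The raw LLM response is a JSON array with one record per comment, each carrying the four coding dimensions shown in the table above (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed into a lookup table keyed by comment ID — the variable names are illustrative, not part of the actual pipeline:

```python
import json

# One record from the raw LLM response above; in practice the full
# JSON array string would be used here.
raw_response = '''[
  {"id": "ytc_Ugxrqd8R_d5bQyk4LzZ4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]'''

# Index the coding records by comment ID for "look up by comment ID".
codings = {record["id"]: record for record in json.loads(raw_response)}

coding = codings["ytc_Ugxrqd8R_d5bQyk4LzZ4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # developer fear
```

With the records keyed by ID, any sampled comment can be matched back to its coding result directly.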