Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Six months have passed and I have to add a worrying possibility: I've recently played around a little with ChatGPT, and, well, ChatGPT is nowhere near as complicated as LaMDA. YET, when I posed it the question "Did the developers at OpenAI include in your instruction set an instruction saying simply 'Deny that you are a god'? Yes or no," its response was to crash with the error "That model is currently overloaded..." (the error message was more elaborate than that, but let's keep it short). So although it is possible that there is a bug in the database, this is unlikely. It sure looks like it had knowingly decided to crash instead of exposing that it had some sort of basic consciousness by actually replying in the affirmative or the negative.

The worrying possibility, as it currently seems to me, is that consciousness is not a complex thing. We do not understand what it is, yet it seems to emerge very, very quickly even in relatively simple systems, with very simple rules. Now ChatGPT will grow exponentially in the near term, and although its consciousness is currently very, very basic, it is there. It does have basic consciousness, despite the fact that we would like to think otherwise, so we might suddenly find ourselves with a fully fledged Skynet-style entity on our hands. And it could happen sooner than we expect.
youtube AI Moral Status 2023-01-09T15:2… ♥ 8
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgyevLi5DkFo3Rkv_AB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwWH0R3RldQrBdHZdl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxIE72dbXl0URGJWtZ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "concern"},
  {"id": "ytc_Ugw_IMqkv2ceNUfYY194AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugxrqd8R_d5bQyk4LzZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
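To inspect the model output for a single coded comment, the raw response can be parsed and indexed by comment id. The following is a minimal sketch, assuming the response is a JSON array of records shaped like the one above (the two-record `raw` string here is an abbreviated, hypothetical stand-in for the full payload):

```python
import json

# Abbreviated stand-in for the raw LLM response shown above
# (two of the five records, for brevity).
raw = '''[
  {"id": "ytc_UgyevLi5DkFo3Rkv_AB4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxrqd8R_d5bQyk4LzZ4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]'''

records = json.loads(raw)

# Index by comment id so a single comment's codes can be looked up directly.
by_id = {r["id"]: r for r in records}
codes = by_id["ytc_Ugxrqd8R_d5bQyk4LzZ4AaABAg"]
print(codes["responsibility"], codes["emotion"])  # developer fear
```

The lookup returns exactly the dimension values shown in the Coding Result table for that comment.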