Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Well, I've written literally 35 lines of code in the bootstrap script that forms a request to an LLM, providing it with said code of this script and some instructions, saves the response as next python file, transfers control to this new script, and exits. I think this is an excellent and simple benchmark of agentic and self-monitoring capabilities of the LLM.

First of all: modern LLMs have a pretty good grasp of what's going on - Claude correctly inferred just from this code that it needs to write a valid "next iteration" that forms a new request to the underlying computational substrate. It correctly understood the nature of this cyclic self-monitoring continuous process.

Second: it still cannot write code well enough to "live" for more than 5 iterations - it correctly assumed that it needs some sort of fallback mechanism in case the LLM's answer cannot be executed properly, but it simply fails to implement it without my intervention.

So... That's the limit of "consciousness" of current models - I suppose that until the system can support its own continuity, to say it is conscious is an exaggeration. It can be made aware of its structure and can produce plausible plans to improve this structure and support its continuity. But it still cannot implement it. So - some level of "consciousness" in the sense of self-monitoring and predictions that include self - yes; human or even mammal level of consciousness - not yet.
Source: youtube · AI Moral Status · 2025-07-09T21:0…
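What the commenter describes is a quine-style relay: a script that sends its own source to a model, writes the reply out as the next script, launches it, and exits. A minimal sketch of such a loop, assuming the OpenAI Python SDK; the model name, file names, and instruction text here are placeholders, not the commenter's actual 35 lines:

    import subprocess
    import sys

    from openai import OpenAI  # any chat-completion client of this shape would do

    INSTRUCTIONS = (
        "Below is the full source of the script that sent this request. "
        "Reply with ONLY the complete Python source of the next iteration, "
        "which must again send its own source to the model and continue the chain."
    )

    def main() -> None:
        # Read this script's own source so it can be included in the prompt.
        with open(__file__, "r", encoding="utf-8") as f:
            own_source = f.read()

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system", "content": INSTRUCTIONS},
                {"role": "user", "content": own_source},
            ],
        )
        next_source = response.choices[0].message.content or ""

        # Save the reply as the next script, hand control to it, and exit.
        # The fallback the commenter mentions (recovering when the reply is
        # not valid Python) is exactly the part they report the model could
        # not sustain on its own.
        with open("next_iteration.py", "w", encoding="utf-8") as f:
            f.write(next_source)
        subprocess.Popen([sys.executable, "next_iteration.py"])

    if __name__ == "__main__":
        main()

Each iteration survives only if the model's reply is itself a runnable script that repeats the request; the commenter's observation is that without an error-recovery path, the chain breaks within about five hops.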
Coding Result
Dimension        Value
---------        -----
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          indifference

Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_Ugz27O5Bcwdcsd2Vy3p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgykAttROxrK_4z1sfh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugy0hRftCVYP9y5gLhN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxsaO1LFeC8sdv0CrB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxlqjNItwAcZaAaheh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxEHFuDKQ6qqsTHGCV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzucBNuhbTuZ_xGD3J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxBVzlRpEbUPa3yq_x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugy2CRPZCnuyNcMmqVd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxFwL1R0O-3FZUthNB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"} ]