Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Well, I've written literally 35 lines of code: a bootstrap script that forms a request to an LLM (providing it with the script's own code and some instructions), saves the response as the next Python file, transfers control to that new script, and exits. I think this is an excellent and simple benchmark of the agentic and self-monitoring capabilities of an LLM.
First of all: modern LLMs have a pretty good grasp of what's going on - Claude correctly inferred just from this code that it needs to write a valid "next iteration" that forms a new request to the underlying computational substrate. It correctly understood the nature of this cyclic self-monitoring continuous process.
Second: it still cannot write code well enough to "live" for more than 5 iterations. It correctly assumed that it needs some sort of fallback mechanism in case the LLM's answer cannot be executed properly, but it simply fails to implement one without my intervention.
So... That's the limit of "consciousness" of current models - I suppose that until the system can support its own continuity, to say it is conscious is an exaggeration. It can be made aware of its structure and can produce plausible plans to improve this structure and support its continuity. But it still cannot implement it. So - some level of "consciousness" in the sense of self-monitoring and predictions that include self - yes; human or even mammal level of consciousness - not yet.
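The bootstrap loop the comment describes can be sketched roughly as follows. This is an illustrative reconstruction, not the commenter's actual 35 lines: `query_llm`, `PROMPT`, and `next_iteration.py` are made-up names, and the syntax check stands in for the fallback mechanism the comment says current models fail to implement on their own.

```python
import os
import sys

# Instructions sent along with the script's own source code. The exact
# wording here is hypothetical.
PROMPT = (
    "Below is the Python script that produced this request. Write the "
    "next iteration: it must send its own source to the LLM, save the "
    "reply as a new script, transfer control to it, and exit.\n\n"
)


def is_valid_python(src: str) -> bool:
    """Fallback check: refuse to hand control to code that won't parse."""
    try:
        compile(src, "<next_iteration>", "exec")
        return True
    except SyntaxError:
        return False


def query_llm(prompt: str) -> str:
    # Placeholder for a real API call (e.g. an HTTP request to a model
    # endpoint) that returns the model's proposed next script as text.
    raise NotImplementedError("plug in a real LLM client here")


def bootstrap() -> None:
    # Read this script's own source and ask the LLM for the next iteration.
    with open(__file__) as f:
        own_source = f.read()
    next_source = query_llm(PROMPT + own_source)

    # Without some check like this, a malformed reply ends the "life" of
    # the loop -- the failure mode the comment observes after ~5 iterations.
    if not is_valid_python(next_source):
        sys.exit("generated script does not parse; stopping the loop")

    next_path = "next_iteration.py"
    with open(next_path, "w") as f:
        f.write(next_source)

    # Replace the current process with the next iteration, then exit.
    os.execv(sys.executable, [sys.executable, next_path])
```

The `os.execv` call is what makes each generation a hand-off rather than a subprocess: the old script's process image is replaced outright, so continuity depends entirely on the quality of the generated file.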
Source: youtube · Video: AI Moral Status · Posted: 2025-07-09T21:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz27O5Bcwdcsd2Vy3p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgykAttROxrK_4z1sfh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy0hRftCVYP9y5gLhN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxsaO1LFeC8sdv0CrB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxlqjNItwAcZaAaheh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxEHFuDKQ6qqsTHGCV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzucBNuhbTuZ_xGD3J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxBVzlRpEbUPa3yq_x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy2CRPZCnuyNcMmqVd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxFwL1R0O-3FZUthNB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```