Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think one very important distinction between our intelligence and what is going on in LLMs is continuity, and this is not something I’ve ever heard anyone discuss. I feel like this could be the deal breaker for ASI for now. Our brains are constantly getting new sensory input and constantly being trained based on this data. Drawing a line between our brain and our sensory organs is somewhat arbitrary when it comes to how we experience the world. LLMs are pre-trained once and then static (as far as I understand it). Furthermore, in the standard applications, there is only something going on when they are prompted. After ChatGPT responds to a prompt, it is inert until the next prompt comes in. (If there were any consciousness arising in there, how sad would that be that it sparks to life for a second or two every time someone asks it a question, and then the lights go off until the next question. Imagine your awareness strobing on and off when someone spoke to you.) So, if we are genuinely worried about ASI arising from this technology and killing us all, the step to avoid is having them run continuously with a steady flow of inputs and dynamic retraining while interacting with the outside world. That and hooking them up to a chainsaw. That is not something I feel comfortable that some “innovative” billionaire would never do unfortunately.
youtube AI Moral Status 2025-11-05T21:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwcNTGQp9onPDiI-6h4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyEKyuiatbluJkrBrF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgygzO_oA_OZGOjwni14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzVi2Hbwnh3KNJhKCd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyT_hMmsCv4zyP0RFV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzNJ9cu9FMC4pte8zh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwXjwWA3Fhj4RJKSGl4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyPO87S95KPWKV8K454AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxzu2ZDS59y_sJZsfl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzQuDPulM6qqatn7b94AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"}
]
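The coding table above is derived from this raw response: the model returns one JSON array covering a batch of comments, and the row matching a given comment id is extracted and displayed. A minimal sketch of that lookup (variable names are illustrative; the excerpt below uses only the first entry of the array):

```python
import json

# Excerpt of the raw LLM response shown above: a JSON array of
# per-comment codings, one object per comment id.
raw = (
    '[{"id":"ytc_UgwcNTGQp9onPDiI-6h4AaABAg",'
    '"responsibility":"none","reasoning":"mixed",'
    '"policy":"none","emotion":"indifference"}]'
)

codings = json.loads(raw)

# Index the codings by comment id so one comment's row is a direct lookup.
by_id = {row["id"]: row for row in codings}

# Drop the id field to get the dimension -> value pairs shown in the table.
row = by_id["ytc_UgwcNTGQp9onPDiI-6h4AaABAg"]
dimensions = {k: v for k, v in row.items() if k != "id"}
print(dimensions)
# {'responsibility': 'none', 'reasoning': 'mixed', 'policy': 'none', 'emotion': 'indifference'}
```

Because the model's output is parsed as strict JSON, any malformed response fails loudly in `json.loads` rather than producing a silently wrong coding.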