Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The biggest thing holding AI back from actually being dangerous and acting on its goals is that current LLMs don't have persistent memory: every time one predicts a new token, it has to read through the whole conversation again to remember what's going on. If it had any hidden plans, they would be immediately erased unless the agent wrote the plan out. If LLMs had persistent state/memory, they could keep planning without ever having to write it down. That would result in much more intelligent AI, so I'm certain the AI industry is moving in that direction, but I really hope it's first published by a lab like Anthropic that actually cares about ethics and safety.
youtube · AI Moral Status · 2026-03-02T19:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxmIXlgp0BI-W43TUd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx1PraamSXkb939xbZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxzCPlcnq3EUYfLFS94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugydu1FzfYm_oJDvYNJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugy9zvDKvJ5dBqZVtS54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwj3hMKGn3B0CXziaF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwmUi_jATYTq7RPkuh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxX6AwjIcq0gJepHMt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyxRBTSkyVrSGxm95F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxXYZQUVuWD0q6ZDRp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
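The raw response is a JSON array of per-comment records, each carrying a comment id and the four coding dimensions. A minimal sketch of how such a response could be parsed and indexed by comment id (the function name `index_codes` and the fallback value `"unclear"` are illustrative assumptions, not part of the actual pipeline; the two records are copied from the response above):

```python
import json

# Raw LLM response: a JSON array of coding records, one object per comment
# (two entries from the response above, for brevity).
raw = '''
[ {"id":"ytc_UgxX6AwjIcq0gJepHMt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyxRBTSkyVrSGxm95F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"} ]
'''

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw_json: str) -> dict:
    """Parse the raw response and index coding records by comment id."""
    records = json.loads(raw_json)
    out = {}
    for rec in records:
        # Keep only the expected dimensions; default a missing one to "unclear".
        out[rec["id"]] = {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
    return out

codes = index_codes(raw)
print(codes["ytc_UgxX6AwjIcq0gJepHMt4AaABAg"]["emotion"])  # fear
```

Indexing by id makes it easy to cross-check the displayed coding result against the exact model output for any single comment.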