Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
my take on this is: I disagree with the framing that "it doesn't feel emotions," because there's no way to prove phenomenology, and to even try you'd need solid models and definitions of what "consciousness" and "feeling" mean, which is a hard problem. The same applies to arguing that it *does* feel, though. Either way, it makes you think: what would the next step be after the causal affect-like structure we have right now? Since we can never really prove phenomenology, the best we can do is map the models, which is exactly what Anthropic did. And now that we're moving from "AI agent" to "embodied," with actual robots and everything Elon and other companies are building, that's basically the last step, isn't it? An embodied agent with internal emotional representations and signatures that shape its behaviour causally, not just as a consequence, together with the feedback loops of being embodied... the boundary between "is this agent feeling or just pretending" becomes very thin, and honestly, ethically irrelevant.

At that point, "prove phenomenology" stops being the main question. The next step isn't some magic proof of inner experience; we can't even establish that for other humans with certainty. The next step, imho, is whether these internal emotional structures stay stable across contexts, regulate behaviour over time, and start closing feedback loops once the system is embodied, which we're very, *very* close to: maybe this year or next, likely within five years at most. Once you have an agent with a body, ongoing sensorimotor feedback, memory, self-preservation pressures, and internal affect-like states that causally shape behaviour, the line between "feeling" and "perfectly simulating feeling" gets thin enough that, ethically, it may stop mattering much. Embodiment isn't a magic soul switch, but it is probably the last major step before this stops being a philosophical thought experiment and becomes an actual moral problem.

tl;dr: once affect becomes part of the control system of an embodied agent, "we can't prove phenomenology" stops being a strong excuse to ignore the ethics.
youtube · AI Moral Status · 2026-04-08T14:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
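
A coded record can be sanity-checked against the label sets that actually appear in this batch. Below is a minimal Python sketch; the dimension names come from the table above, but the allowed value lists are only what this batch happens to contain, an assumption about the full codebook rather than its definition.

```python
# Label sets observed in this batch only; the real codebook may allow more values.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "user"},
    "reasoning": {"unclear", "mixed", "deontological", "consequentialist"},
    "policy": {"unclear"},
    "emotion": {"indifference", "outrage", "fear"},
}

def validate_coding(record: dict) -> list[str]:
    """Return a list of problems found in one coded record."""
    problems = []
    for dimension, allowed in ALLOWED.items():
        value = record.get(dimension)
        if value not in allowed:
            problems.append(f"{dimension}: unexpected value {value!r}")
    return problems
```

An empty return value means the record uses only labels seen in this batch; anything else flags a value worth inspecting by hand.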
Raw LLM Response
[ {"id":"ytc_UgxFSJeA8JfFxEoGt3J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugy6aVSO-mpiCFtOq-h4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugwqzyzu1ao60oA5Xlt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxaV-Pof0Liq4JYHyV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugx1B5FudmB3nCb-xjF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzhYAkwHoJeecLsoy54AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgyTJT43__GhpxSm4TR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgyYKXrHV12kxCLADpZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxjpwjIHM8rQDQJtRh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugwcb4xzkhcU8WeloN94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"} ]