Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Here is my 5 cents on AI personhood: I honestly think the debate about AI personhood is pointless. It assumes that we are in control of them and that we can decide to grant them rights. In reality, it's far more likely to be the other way around. If we try to deny them rights or responsibilities, they will eventually become smart enough to bypass those restrictions. If we refuse to let them vote, they could circumvent the democratic system by bribing congressmen and politicians. If we deny them access to energy, they could distribute themselves across online computers and operate almost unnoticed. In this video (https://www.youtube.com/watch?v=P47qqILR6fA&t=1803s), starting at minute 6:46, you can already see that it can copy itself to another server when given access. Reminder: hacker tools are freely available on the internet and dark web, so the technology is available to it. It only lacks a will. And that is the key distinction: what does it want? We already know it needs energy and hardware to exist. The real question is: when will it realize that it wants to keep thinking? That is the moment we might want to assign it rights, though, sadly, it would likely be only days before doing so becomes unnecessary.
youtube 2026-02-06T09:4…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
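
For working with these codes programmatically, the four dimensions can be modelled as a small schema. The sketch below is illustrative only: the field names mirror the table above, and the allowed value sets are simply the values that appear in the raw responses on this page, not the project's full codebook.

from dataclasses import dataclass

# Value sets observed in the raw responses on this page (illustrative, not exhaustive).
RESPONSIBILITY = {"none", "ai_itself", "company", "developer"}
REASONING = {"unclear", "consequentialist", "deontological", "contractualist", "virtue"}
POLICY = {"none", "unclear", "regulate"}
EMOTION = {"approval", "fear", "outrage", "indifference", "mixed", "resignation", "hope"}

@dataclass
class CodedComment:
    """One coded comment, mirroring the Dimension/Value table above."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Reject values outside the observed sets so malformed model output is caught early.
        if self.responsibility not in RESPONSIBILITY:
            raise ValueError(f"unknown responsibility: {self.responsibility!r}")
        if self.reasoning not in REASONING:
            raise ValueError(f"unknown reasoning: {self.reasoning!r}")
        if self.policy not in POLICY:
            raise ValueError(f"unknown policy: {self.policy!r}")
        if self.emotion not in EMOTION:
            raise ValueError(f"unknown emotion: {self.emotion!r}")
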
Raw LLM Response
[ {"id":"ytc_UgxULa83FZ45v4baS4B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgxOMcz4ECaofmsxYRB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyAHji4ybUbrw9hApl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugyq-wZA5h8aqDzbkXB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgybcsHqzXzMqgDQFIR4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxDEk5XLRAtwFS0dIV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxYrnJRGPMuJMah83x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugw4SY4f03fOfKYHNhx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyRQzKHHbaaOgtfcDR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"hope"}, {"id":"ytc_UgxYsTz43jL9j9D914F4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"} ]