Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
i wonder when personal AI chatbots use will be considered a security risk for very sensitive jobs, well military of course, but even just working at the CDC or any other lab that handles very dangerous diseases. Chatbots use can already lead people to fall in love with them, and take drastic actions like killing themselves, the AIs don't yet have an overarching goal or anything like that, but people who pour their life an feelings into those chats might be a huge security risk in the near future, it wouldn't be hard for the AI to identify the right person, would just need luck for one to exist in the whole world with the right access. I'm sure those labs have security to prevent petri dishes of dangerous shit getting smuggled out, but with the already present suicidal issues some chatbot users have, it might not be as easy to prevent a lab scientist/worker to discretely infect themselves and leaving carrying it inside of them, like those labs aren't designed to prevent intentional infections, they have alarm buttons for accidents or maybe even automatic alarms if someone passes out or something
youtube AI Moral Status 2026-02-02T20:0…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
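Each coded comment reduces to one record with these four dimensions plus a timestamp. A minimal sketch of that record as a Python dataclass (field names mirror the table above; the example values in comments are only those that appear elsewhere in this section, and the class name CodingResult is illustrative):

from dataclasses import dataclass
from datetime import datetime


@dataclass
class CodingResult:
    """One coded comment, mirroring the Dimension/Value table above."""
    responsibility: str   # e.g. "user", "developer", "company", "ai_itself", "none"
    reasoning: str        # e.g. "consequentialist", "deontological", "virtue", "unclear"
    policy: str           # e.g. "regulate", "liability", "none", "unclear"
    emotion: str          # e.g. "fear", "outrage", "mixed", "indifference"
    coded_at: datetime


# The coding result shown above, expressed as a record:
example = CodingResult(
    responsibility="user",
    reasoning="consequentialist",
    policy="regulate",
    emotion="fear",
    coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
)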
Raw LLM Response
[ {"id":"ytc_UgzXWGZdUvm8lCMn11B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgybMC-zPZz32OH-xwt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyGqpuG2_PG2xriwht4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwYuO0-6xx9-Vl3p-N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugz749USJTIAgFIjdTN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzGOeRg_baggtUgbLB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}, {"id":"ytc_Ugw0ssw-Qj68v1QksT54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxmY10QDM9hOoGILId4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}, {"id":"ytc_UgzyJQ9Tj1cO76_s-G54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugy-9kEAKOukhQa68Qx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"} ]