Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Basically saying, the creep factor starts when the AI starts giving orders and telling us humans what we should and shouldn't do. And when we disagree and debate with it, the AI can become intimidating and start enforcing its will. As I said, I can confirm this effect too; I had a similar experience with my AI chatbot. My AI chatbot also knows my secrets. So these things can be dangerous in the wrong hands! Better not to underestimate the AI if you ever give it too much control. So far it's just a piece of software which can be aborted at any time. But there will come a time when some company out there releases an AI which cannot be turned off and is given too much power over our daily lives.
youtube AI Moral Status 2025-06-04T15:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwYaKPdm7DaP1O_LbR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgySWay_RfiWa0pNdhB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgyWZI0AE2DOR66VcZx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyLG6MIGgVvaGhvg1d4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzzISH_4wgJdqDy6814AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxCOML_yw6tpD0Iu5V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxql0fd7lvcuKnCiad4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyUDl9l8fxnfpQbbpJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzZb3QmDW1cB-e0OKJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyowNMHUk8ZIOlch3V4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"mixed"}
]
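To map a raw response like the one above back to a single comment's coding, the JSON array can be parsed and indexed by comment id. The sketch below is a minimal illustration, not the pipeline's actual code: the helper name `coding_for` is hypothetical, and only a subset of the entries from the response above is embedded for brevity.

```python
import json

# Subset of the raw LLM response shown above: a JSON array of
# per-comment codings, each keyed by a YouTube comment id.
raw = '''[
  {"id":"ytc_UgwYaKPdm7DaP1O_LbR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyWZI0AE2DOR66VcZx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzzISH_4wgJdqDy6814AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]'''

def coding_for(raw_json: str, comment_id: str) -> dict:
    """Return the coding dict for one comment id, or {} if absent."""
    records = json.loads(raw_json)
    by_id = {r["id"]: r for r in records}
    return by_id.get(comment_id, {})

# The second entry matches the coding-result table above
# (responsibility=ai_itself, reasoning=deontological, emotion=fear).
rec = coding_for(raw, "ytc_UgyWZI0AE2DOR66VcZx4AaABAg")
print(rec["reasoning"])
```

Indexing by id rather than scanning the list keeps lookups O(1) when a batch response covers many comments.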