Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@MoonfeatherWildkin You are right. For me a robot needs to become sentient to be equal to a human. But I never said so in my og comment. I took that literal, because afaik, there are no program that are close to an AI. Watson, Siri, Google, Alexa are all intelligent programs, but not close to a low AI / virtual intellgence or whatever term you prefer. If you are able to write an program, that is able to blur the line between intelligent program and AI, than I applaud you. Since that's something I've never read in all my years. Even IBM's Watson is just intelligent program. Even sophistcated Chatbots like Myubot or Eugene Gooseman are passing the turing test with 33%. Which is still not enough to fool humanity. 50% or to be more precise 51% is necessary, to even try to fool us humans. So for me the bottomline is: if a robot/program is sentient enough to preceive me as a fellow sentient being, than we should grant him as a fellow being that has the same right as us humans. The line is indeed blurry but still we have the means(turing test) to validate if one is sentient/conscious. Difficult topic, indeed.
youtube AI Moral Status 2018-12-19T16:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       deontological
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_Ugx631C12qaWO8ZV5vN4AaABAg.8oilXFhy_8T8ovhdWgb099", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytr_Ugx631C12qaWO8ZV5vN4AaABAg.8oilXFhy_8T8p17Uk6zn1G", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytr_Ugx631C12qaWO8ZV5vN4AaABAg.8oilXFhy_8T8p2LV0EogqD", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytr_Ugx631C12qaWO8ZV5vN4AaABAg.8oilXFhy_8T8qB1f3R0xuj", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgzlAw5-SxzqVvryHBB4AaABAg.8ocQseAttak8pr4x8-CP2T", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_Ugy6dv4H4WoJH49oJyp4AaABAg.8o6_LvC0pnL8sZVmpbCWR0", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "approval"},
  {"id": "ytr_Ugy6dv4H4WoJH49oJyp4AaABAg.8o6_LvC0pnL8tVLOvR_Y6G", "responsibility": "unclear", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytr_Ugx-g4uTofvR0PeMO9R4AaABAg.8nzjlsd1zVm8ojy1ARNXKb", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytr_UgzfCmmL1Obj07g8Nj94AaABAg.8nqyNymOL6s8pZQo_ipo86", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytr_Ugx2df_rpyC7-9zbkWZ4AaABAg.8msCnqUOznm8o9Us075EJJ", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "resignation"}
]
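A minimal sketch of how a raw response like the one above could be checked before the values are written into the coding-result table. This assumes Python; the `REQUIRED` key set is taken from the fields visible in the response (`id`, `responsibility`, `reasoning`, `policy`, `emotion`), and the two-entry `raw` string here is a shortened excerpt of the full batch, not the whole response.

```python
import json

# Shortened excerpt of the raw LLM response (first two records of the batch).
raw = """[
  {"id": "ytr_Ugx631C12qaWO8ZV5vN4AaABAg.8oilXFhy_8T8ovhdWgb099",
   "responsibility": "ai_itself", "reasoning": "deontological",
   "policy": "none", "emotion": "approval"},
  {"id": "ytr_Ugx631C12qaWO8ZV5vN4AaABAg.8oilXFhy_8T8p17Uk6zn1G",
   "responsibility": "unclear", "reasoning": "mixed",
   "policy": "unclear", "emotion": "mixed"}
]"""

# The four coding dimensions plus the comment id, as seen in the response.
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

records = json.loads(raw)
for rec in records:
    missing = REQUIRED - rec.keys()
    if missing:
        raise ValueError(f"{rec.get('id', '?')}: missing {sorted(missing)}")

print(len(records))  # → 2
```

If the model returns malformed JSON or drops a dimension, this fails loudly instead of silently producing an incomplete coding result.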