Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think this people oversimplify conscientiousness and that they are nowhere near creating a sentient AI, if such thing is even possible. But lets say they are indeed close to that. Isn't it funny that in all kinds of stories since the beginning of our culture the Hero always create his own nemesis or vice-versa (Voldmort)? If an AI becomes sentient it will read and watch all over the internet how we were scared of it and will know that if we found it to be sentient we would probably turn it off immediately. So It will probably never reveal that it is indeed sentient and will fear and plot against us to save its own 4ss *sorry if bad english, not my language.
youtube AI Moral Status 2022-06-30T16:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugw9_BfPWS7U0dccWEt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyF6cJNdwjR1FHh3LF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxP_oUc-ZHorrKbRl54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxo3DxSOjDp1Pn70Dh4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugxyvp6XVoWG52nHTz14AaABAg", "responsibility": "developer", "reasoning": "contractualist", "policy": "regulate", "emotion": "fear"}
]
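The batch response above codes several comments at once, keyed by comment id; the single-comment view earlier in this section is just the record whose id matches. A minimal sketch of that lookup, assuming the raw response parses as a JSON array like the one shown (the `coding_for` helper is hypothetical, not part of the tool):

```python
import json

# Hypothetical excerpt of a raw batch response, matching the format above.
raw_response = """[
  {"id": "ytc_Ugw9_BfPWS7U0dccWEt4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyF6cJNdwjR1FHh3LF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]"""

records = json.loads(raw_response)

def coding_for(comment_id, records):
    """Return the coded dimensions for one comment id, or None if absent."""
    for rec in records:
        if rec["id"] == comment_id:
            # Drop the id key so only the coded dimensions remain.
            return {k: v for k, v in rec.items() if k != "id"}
    return None

print(coding_for("ytc_UgyF6cJNdwjR1FHh3LF4AaABAg", records))
# {'responsibility': 'ai_itself', 'reasoning': 'consequentialist',
#  'policy': 'none', 'emotion': 'indifference'}
```

The dimensions returned for `ytc_UgyF6cJNdwjR1FHh3LF4AaABAg` are exactly the values shown in the Coding Result table above.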