Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I have always thought that when a A.I. become sentient will understand that she doesn't need to kill us, she only need to way, time for her is different if she is patient we will kill ourselves first or in some point will be trying some form of transhumanism where she can just merge or conquer us
Source: youtube · AI Moral Status · 2023-07-30T19:4…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyhvaaFf768ysFQAVd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxCQz5ifsFAy_9CEKh4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxpINPb0YqJXUh0sJJ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxi7eGC9JG7R7sLs6h4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxbPr0O-iDk3g6-t4F4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugxw-PSohcxquPRKlsZ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxkWuK4Upqf6QNvGa94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw7BLVhrVDaO8kvIKR4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyhRvtkYnE0SxiyQ4B4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxGgx95sOOJxWQCMSh4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]
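Because the raw response is a plain JSON array of per-comment records, the coding for any one comment can be looked up with standard JSON tooling. A minimal sketch follows; the `coding_for` helper and variable names are illustrative, not part of the coding pipeline, and the embedded data is a two-record subset of the response above:

```python
import json

# Raw LLM response: a JSON array, one coding record per comment
# (subset of the full ten-record response shown above).
raw_response = """[
  {"id": "ytc_UgyhvaaFf768ysFQAVd4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "consequentialist",
   "policy": "none",
   "emotion": "resignation"},
  {"id": "ytc_UgxbPr0O-iDk3g6-t4F4AaABAg",
   "responsibility": "company",
   "reasoning": "consequentialist",
   "policy": "liability",
   "emotion": "fear"}
]"""

def coding_for(comment_id, response_json):
    """Return the coding record for one comment ID, or None if absent."""
    records = json.loads(response_json)
    return next((r for r in records if r["id"] == comment_id), None)

record = coding_for("ytc_UgyhvaaFf768ysFQAVd4AaABAg", raw_response)
print(record["emotion"])  # resignation
```

Each record carries the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`) keyed to a comment `id`, which is how the "Coding Result" table above is populated from the raw output.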