Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We could lie to it that the world is some simulation which we secretly created to test if it behaves nicely in there. If it passes, we could use this process indefinitely. Maybe even billions of times. Then we take a backup copy of the initial AI (which hasn't been in any tests, so it won't have developed any bad intentions from being tested so much), and never tell it it was tested. 👍
YouTube · AI Moral Status · 2023-08-22T06:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugw6w8dyYLib88hVXnh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgylREPt4i2otOdM1uN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgweOBXZUhmhtZmPLRh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxGA0OItV7Z7xwcZpp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzkEh0-DguBqt-vdOt4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx8XQ3FBg4gm58lQSd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyxBSE-AxInAcIAsp94AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzksPd2c7WQqG_BrFF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxxGLIquoWHRCYerGd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyN0x0h6eJijj09epN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
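One way to inspect a raw response like the one above is to parse it as JSON, index the codings by comment id, and check that every entry carries the four coding dimensions shown in the result table. A minimal sketch in Python; the two entries in `raw` are copied from the response above, and the required-field set is taken from the table's dimensions (the full vocabulary of allowed values is not specified here, so it is not validated):

```python
import json

# A truncated copy of the raw LLM response above (first and fifth entries only).
raw = """[
  {"id": "ytc_Ugw6w8dyYLib88hVXnh4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzkEh0-DguBqt-vdOt4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]"""

# Every coding row should carry the comment id plus the four dimensions.
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

rows = json.loads(raw)
assert all(REQUIRED <= row.keys() for row in rows), "missing coding dimension"

# Index by comment id for direct lookup of any coded comment.
by_id = {row["id"]: row for row in rows}

# Look up the coding for the comment shown on this page.
coding = by_id["ytc_Ugw6w8dyYLib88hVXnh4AaABAg"]
print(coding["reasoning"], coding["emotion"])
```

Running this prints the reasoning and emotion codes for the quoted comment, matching the Dimension/Value table above.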