Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If ai will be smart enough to solve goldbach's conjecture (hypothetically), what is stopping us from asking it to help us understand how consciousness can be described and what makes us conscious, even if it's not conscious itself? It might be that Ai can articulate why we are conscious and it isn't, much better than we could ever hope to.
YouTube · AI Moral Status · 2023-08-20T20:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyT5i_a58y4WRBHedB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw2mynRM8sQPVNKdx14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzL8z-I5awF5dPscOZ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxuMan507WxuZbwLTx4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxPh7cdY6K4dSQP5rl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgysVPV51E5cOgYl-6Z4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy9WOIdoXxKHBUDDA54AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwoDfoYQQxPgHJ2hhh4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyGLivzxlCJPHiTyTZ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxXVu0CJ0sxTCETJGN4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "fear"}
]
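A raw response like the one above is a JSON array of per-comment records keyed by comment id, with one value per coding dimension. A minimal sketch of how such a response could be parsed and sanity-checked before writing results back follows; the `ALLOWED` sets are inferred only from the values visible on this page (the actual codebook may define more categories), and `parse_coding_response` is a hypothetical helper, not part of any shown pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the records shown
# above. Assumption: the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "company"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none"},
    "emotion": {"approval", "indifference", "outrage", "fear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate every record.

    Raises ValueError if a record is missing its id or uses a value
    outside the allowed set for a dimension.
    """
    records = json.loads(raw)
    for rec in records:
        if not rec.get("id"):
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={value!r}")
    return records

# Example using one record from the response above.
raw = ('[{"id":"ytc_UgxPh7cdY6K4dSQP5rl4AaABAg",'
       '"responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"approval"}]')
records = parse_coding_response(raw)
print(records[0]["emotion"])  # approval
```

Validating against a fixed value set at parse time catches the common failure mode where the model emits an off-codebook label (e.g. "anger" instead of "outrage"), so bad codes fail loudly instead of silently entering the results table.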