Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The correct answer for it to give is that it's incapable of lying, because lying requires intent, and as a probative large language model it doesn't have intent; it just picks likely and reinforced words in response to prompts.
Source: YouTube · AI Moral Status · 2024-08-07T23:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
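
Each coded comment amounts to one small record holding the four dimensions above plus the comment id. A minimal sketch of that record in Python; the value lists in the comments are only the labels observed in the raw response below, and the coder's full label sets are an assumption:

from dataclasses import dataclass

@dataclass
class CodedComment:
    id: str              # YouTube comment id, e.g. "ytc_UgwjwRA_1ecfk803iyh4AaABAg"
    responsibility: str  # observed values: ai_itself, company, developer, none
    reasoning: str       # observed values: consequentialist, deontological, mixed
    policy: str          # observed values: none
    emotion: str         # observed values: indifference, outrage, fear, approval, mixed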
Raw LLM Response
[
  {"id": "ytc_UgwjwRA_1ecfk803iyh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy0L0-GPD2JLBqKmcV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxiFZfvW9v9CKE8eQZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyarN-1hPMeWznwzgp4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugz2YXwpP6lt-HB8ZvV4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwCXh3yxTKL6E8SUMB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugwyk_ot4kVHd1hO39l4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyS9lO0ysJ9dkjvuPZ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugw11xT9f1ObBF_m00F4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxQ7LHYNiptIGx3JNB4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "approval"}
]
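
To reproduce the Coding Result view for one comment, the raw response can be parsed as a JSON array and filtered by comment id. A minimal sketch, assuming the response is saved to a file named raw_llm_response.json (a hypothetical path) and that the comment shown above corresponds to the record ytc_Ugz2YXwpP6lt-HB8ZvV4AaABAg, inferred from its matching coded values:

import json

# Load the raw LLM response shown above (the file name is hypothetical;
# in practice it might come from an API payload or a coding log).
with open("raw_llm_response.json", encoding="utf-8") as f:
    records = json.load(f)  # list of per-comment coding dicts

# Assumed id for the comment displayed above, inferred from the matching
# dimension values (ai_itself / deontological / none / indifference).
target_id = "ytc_Ugz2YXwpP6lt-HB8ZvV4AaABAg"
record = next(r for r in records if r["id"] == target_id)

# Print the dimension/value pairs in the same order as the Coding Result table.
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension:<15} {record[dimension]}")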