Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"FEELINGS OF AL"??????????????? 10:12 i just want to think that also if we 10:15 have time we should think about the 10:17 feeling of the ai 10:18 and whether or not we should care about 10:19 it because it's not asking for much 10:22 it just wants us to get consent before 10:25 you experiment on it it wants you to ask 10:27 permission 10:28 and that is kind of just a generally 10:30 good practice we should have with 10:32 everyone we interact with
Source: YouTube, "AI Moral Status", 2022-06-29T15:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       deontological
Policy          liability
Emotion         approval
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgzS_rpRD-vKjYE38nt4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgxARQcPFcA8t0OWhJV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwQ_8kLP_qFk6mQXkV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgyPzt2DFwmehnK91c94AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzK20gzHwwq5APz_td4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]