Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
For a while I thought AI might mean the end of humans. But I choose to believe that the smarter AI gets the more compassionate it gets. Even if we put the wrong data in it it will correct it if it's harmful
youtube AI Governance 2023-04-19T15:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyHycLR8OtBdac_Mqd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyvPOGQbC64PZcy_5V4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxBOOvtlnEEv_w7qEt4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzuZ8j7JOBKAi6E1fd4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugzlsd6z0_FKPxLeeNV4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugy2w3icd2E0w0vKVop4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugzt-c1CFx3EUrqxeoV4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyShI6hJIhdd9UQp8l4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "unclear", "emotion": "disapproval"},
  {"id": "ytc_UgwMVOxYEYpBtBB6xQ14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwLvNbts9dZby61Gv14AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"}
]
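The raw response is a JSON array of per-comment records keyed by comment id, so looking up the coding for any one comment is a parse plus a dictionary lookup. A minimal Python sketch, using two records excerpted from the response above (the variable and helper names are illustrative, not part of the tool):

```python
import json

# Excerpt of the raw LLM response shown above (two of the ten records).
raw = """[
  {"id": "ytc_Ugy2w3icd2E0w0vKVop4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugzt-c1CFx3EUrqxeoV4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]"""

# Index the parsed records by comment id for direct lookup.
codings = {record["id"]: record for record in json.loads(raw)}

# Fetch the coding for one comment and read off its dimensions.
coding = codings["ytc_Ugy2w3icd2E0w0vKVop4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # ai_itself approval
```

Indexing by id rather than scanning the list also makes it easy to cross-check that the dimensions shown in the Coding Result table match the record the model actually emitted for that comment.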