Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Foolish to be pessimistic? Not so sure I agree with that considering why AI would require humans in 10-20 years. How you going to give control of everything to something millions of time smarter than the smartest human then think your going to maintain control? That’s foolish.
youtube AI Governance 2023-06-08T15:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzgG1iKqs9H8sGu_W94AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "sadness"},
  {"id": "ytc_UgwNtdtZUn9tw39i1sp4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx8FEreIAtjvKFDgNp4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugxe0m2_JLNPAGQIaXh4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugx4A52-7I-ZjdpVO5p4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugy1qNwfkYwXtH2uryd4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz-Wa6qZj8g0QCSPhV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyC_tFXgqbfa1rfDKZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwgjL6M6v1WK1Fd-Kt4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugy9iJV0n57zD2BGrBN4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
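The raw response is a JSON array covering the whole batch, so inspecting one comment means matching its id against the array. A minimal sketch of that lookup, assuming the field names shown above (`responsibility`, `reasoning`, `policy`, `emotion`) and using a two-record excerpt of the response for illustration:

```python
import json

# Excerpt of a batch coding response (two records from the array above).
raw = '''[
  {"id": "ytc_Ugz-Wa6qZj8g0QCSPhV4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyC_tFXgqbfa1rfDKZ4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]'''

# Index the batch by comment id for O(1) lookup.
records = {rec["id"]: rec for rec in json.loads(raw)}

# Pull the coding for the comment shown on this page.
coding = records["ytc_Ugz-Wa6qZj8g0QCSPhV4AaABAg"]
print(coding["emotion"])  # fear
```

The printed values should agree with the Coded Result table; a mismatch would indicate the displayed coding was taken from a different record in the batch.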