Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This idea that humans are like "we dont understand this" but then are like "lets automate what we don't understand so it can improve itself" is plain ignorance. Why are researchers developing something they don't even understand itself?
Source: YouTube · AI Governance · 2025-08-26T21:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       virtue
Policy          unclear
Emotion         outrage
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugy6y28gGOkpw9Gy_pV4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzpMxU2JnPib7ugDIB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugw9wDeZyWodPSwpxGJ4AaABAg", "responsibility": "government", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzORCjktvlyoEdHI4p4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_Ugxd-qdh3pM_mL4g8IB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
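The raw response above is a JSON array of per-comment codes, so recovering the coding result for one comment is a matter of parsing the array and indexing by `id`. A minimal sketch (the `code_for` helper and the trimmed `raw` sample are illustrative, not part of the pipeline; the ids and dimension names are taken from the page):

```python
import json

# Trimmed sample of a raw batch response, shaped like the one shown above.
raw = '''[
  {"id": "ytc_Ugy6y28gGOkpw9Gy_pV4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"}
]'''

# The four coding dimensions displayed in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def code_for(comment_id: str, raw_response: str) -> dict:
    """Return the coded dimensions for one comment id from a raw batch response."""
    records = json.loads(raw_response)          # parse the JSON array
    by_id = {r["id"]: r for r in records}       # index records by comment id
    record = by_id[comment_id]                  # KeyError if the id is absent
    return {dim: record[dim] for dim in DIMENSIONS}

print(code_for("ytc_Ugy6y28gGOkpw9Gy_pV4AaABAg", raw))
# -> {'responsibility': 'developer', 'reasoning': 'virtue', 'policy': 'unclear', 'emotion': 'outrage'}
```

Restricting the result to `DIMENSIONS` also drops any extra keys the model might emit, so malformed or over-complete records surface as a `KeyError` rather than silently passing through.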