Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I try to be positive about our future but it's difficult because we humans are very bad at thinking ahead and put systems in place against what might go wrong. We just invent and then when something goes wrong we make changes. The problem with AI is that we will only have one shot at this. It can't go wrong or it will be too late to make changes.
YouTube · AI Governance · 2025-06-17T23:4…
Coding Result
Dimension      | Value
-------------- | --------------------------
Responsibility | distributed
Reasoning      | consequentialist
Policy         | regulate
Emotion        | fear
Coded at       | 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzwsVxz7jlVBKWgxBB4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "none",     "emotion": "outrage"},
  {"id": "ytc_Ugzh0P7ZKanYm-20qk14AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",  "emotion": "approval"},
  {"id": "ytc_UgxP97lWtU-KGH8mMa14AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_UgwPNdOQFAUk4AumdlR4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwiZFX3s6I5TlshOvB4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "unclear",  "emotion": "outrage"},
  {"id": "ytc_UgyFdU4rebB6DN4L1mV4AaABAg", "responsibility": "unclear",     "reasoning": "consequentialist", "policy": "none",     "emotion": "approval"},
  {"id": "ytc_UgywjLti0-nmtY6LnPR4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_UgyBNm8bsgTzH_y-4Mx4AaABAg", "responsibility": "government",  "reasoning": "deontological",    "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugw04gAgC4apu6riHyR4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "unclear",  "emotion": "outrage"},
  {"id": "ytc_UgyzbXHrCswjf-lghGl4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "unclear",  "emotion": "outrage"}
]
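A minimal sketch of how a raw response like the one above can be parsed and indexed by comment id, so the coding for any single comment can be looked up. It assumes the response is a JSON array with one object per comment, as shown; the two entries in `raw_response` are copied from the dump above, and the function name `index_codings` is illustrative, not from any particular library.

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codings
# (ids and values copied from the response shown in this section).
raw_response = '''
[
  {"id": "ytc_UgwPNdOQFAUk4AumdlR4AaABAg",
   "responsibility": "distributed", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzwsVxz7jlVBKWgxBB4AaABAg",
   "responsibility": "developer", "reasoning": "virtue",
   "policy": "none", "emotion": "outrage"}
]
'''

# The four coding dimensions used throughout the dump.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw response and map comment id -> {dimension: value}."""
    rows = json.loads(raw)
    return {row["id"]: {d: row[d] for d in DIMENSIONS} for row in rows}

codings = index_codings(raw_response)
coding = codings["ytc_UgwPNdOQFAUk4AumdlR4AaABAg"]
print(coding["policy"])   # → regulate
print(coding["emotion"])  # → fear
```

Indexing by id makes it easy to join the model's codings back to the original comments, e.g. to render a per-comment result table like the one above.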