Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Considering humanities track record, does this really come a shock to anyone? I mean, if be shocked if we collectively went, "Yeah... so this A.I. thing could annihilate our species. Maybe we should just put it on the back burner for a few more decades, and make sure we do it right, and safely, so it helps humanity instead of destroying it." Now THAT would be shocking. This? Par for course.
youtube AI Governance 2025-09-05T02:5…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugx26TBmclHoXDpDjAR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzWbx6T3nz-DBiW_XR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyg3C57M1kdWfYEOSZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw5DU5H2MQtVdyuucl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxRQ0o8tpmwz0OMLiZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyGI3hkmXdvmPggFhl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgzLuubv7d52_v-YwIx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwiAdD6YqI3X9xdj1d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwwwVCAKAg-ooT57ch4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw-i_olgeXUYkmzXp14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
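A raw response like the one above can be parsed and sanity-checked before its codes are stored. Below is a minimal sketch; the allowed value sets are inferred only from the codes visible in this sample, not from the tool's authoritative codebook, and `validate_batch` is a hypothetical helper name.

```python
import json

# Allowed codes per dimension, inferred from this sample only
# (assumption -- the real codebook may define more values).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed", "company"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"mixed", "indifference", "fear", "approval", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment ids in this dataset all carry the ytc_ prefix.
        if not rec.get("id", "").startswith("ytc_"):
            continue
        # Every dimension must be present with a known code.
        if all(rec.get(dim) in codes for dim, codes in ALLOWED.items()):
            valid.append(rec)
    return valid
```

For example, feeding it a batch containing one valid record and one with an unknown code returns only the valid one, which matches the "Coding Result" shown above for comment `ytc_UgyGI3hkmXdvmPggFhl4AaABAg`.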