Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Safety can be programmed in. But an intelligent force can always go against its programming. So safety in AI will require constant vigilance. Humans are lousy at constant vigilance. So we will need AI that is divided like the branches of government, and balances of power. Some AI will have to be constantly monitoring other AI seeking to keep each from getting too much power. Even this is not hopeful. Our own system of balance of power has become so corrupted that there is little balance left. In a super-intelligent AI, that corruption will come even faster, through secret negotiations among AI, than in our own. Secretly plotting behind our backs and deceiving us all the way.
youtube · AI Governance · 2025-06-16T15:3…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwQ8eSwBsC_CtVA9H94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyUOEkZlek8P1GptZd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyYKPhC4bIVezzek3J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwPRrLfbYjU65rkC2h4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx9rZxdbM76lfihsht4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw8uTBXqsg_MjAv3h54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzIhNVeP1DlCY4-L014AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwDQ-MhaxCh4OO07v54AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyE-6M-aIdYY6WnSaV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwxVe07_-a_RVhK_QN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
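The raw response is a JSON array with one object per comment id, so the displayed coding result can be recovered by matching the comment's id against the array. A minimal sketch of that lookup, assuming the pipeline works this way (the variable names are illustrative; the id below is the entry in the response whose values match the Coding Result shown above):

```python
import json

# A trimmed copy of the raw LLM response: a JSON array of codings,
# one object per comment id (full array shown in the section above).
raw = (
    '[{"id":"ytc_UgwQ8eSwBsC_CtVA9H94AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"none","emotion":"indifference"},'
    '{"id":"ytc_UgwPRrLfbYjU65rkC2h4AaABAg","responsibility":"distributed",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
)

# Index the codings by comment id for O(1) lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# This id's entry carries the same values as the Coding Result table:
# distributed / consequentialist / regulate / fear.
coding = codings["ytc_UgwPRrLfbYjU65rkC2h4AaABAg"]
print(coding["responsibility"], coding["policy"])
```

Indexing by id rather than scanning the list each time keeps lookups cheap when a batch response codes many comments at once.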