Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We should certainly be careful, as it would be embarrassing if ASI killed humans by mistake. But I can imagine possible futures where humanity willingly votes to transfer its supremacy laws to machines. Also, real commitment to controlling AI implies P.R.C. must be willing to launch nukes on U.S.A. datacenters if (insert a global organization where U.S.A. does not have a veto power here) votes latest Grok is no longer safe.
youtube · AI Governance · 2025-08-27T01:0…
Coding Result
Dimension       Value
Responsibility  government
Reasoning       consequentialist
Policy          regulate
Emotion         mixed
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugx89pM1x1pU0VSWKh54AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugwa29eeSTJaXD-G9Bd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugx1sT0yvRS3K_WZUVt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzDQl0ifP86cCVyPJJ4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgwL4tODYib7_NPpZgd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
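The raw response is a JSON array of per-comment codings keyed by a comment `id`. A minimal sketch of how such a response might be parsed and matched back to a single comment, using two of the records above (the variable names are illustrative, not part of the pipeline):

```python
import json

# Raw LLM response: a JSON array of per-comment codings
# (a two-record subset of the response shown above).
raw = '''[
  {"id": "ytc_Ugx89pM1x1pU0VSWKh54AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzDQl0ifP86cCVyPJJ4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"}
]'''

# Index the codings by comment id so any single comment's
# result can be looked up in constant time.
codings = {row["id"]: row for row in json.loads(raw)}

# Look up the coding for the comment displayed above.
coding = codings["ytc_UgzDQl0ifP86cCVyPJJ4AaABAg"]
print(coding["responsibility"], coding["policy"])  # government regulate
```

Indexing by `id` also makes it easy to detect records the model dropped or duplicated: compare the key set against the ids of the comments that were sent in the batch.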