Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I would like to ask this question because the topic of AI interests me greatly. How would you design a formal and verifiable framework for aligning a superintelligence that can learn, reinterpret, and modify its own goals in unforeseen contexts, ensuring that such modifications never compromise fundamental human values nor its capacity to be controlled, considering that the superintelligence may surpass human understanding and operate in high-dimensional spaces with inherent uncertainty?
Source: youtube · AI Governance · 2025-09-14T04:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugylb9nQwBDUyUsoEfR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzGVWbRCPV5pLcQLCh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyhUx98EEnKWtPIg1p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwZW2iUmpLMVIv8hWZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxFVH29PplrH1TnW554AaABAg","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzAxnC59dVZaxtvynZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgypaiNF7ClQVNYpOqV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwa7IsdI9SD9DO_AHR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwZhnjcn_Xrjt5bGjx4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz3pbDCfPaY1yQyePl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
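The raw response above is a JSON array with one object per coded comment, keyed by comment id. A minimal sketch of how the displayed per-comment coding could be recovered from it, assuming the model returns valid JSON; the helper name `codes_for` is hypothetical, not part of any library:

```python
import json

# Raw LLM response: a JSON array of per-comment codes, as shown above
# (truncated here to two entries for brevity).
raw_response = '''[
  {"id": "ytc_UgypaiNF7ClQVNYpOqV4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzAxnC59dVZaxtvynZ4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]'''

def codes_for(comment_id, response_text):
    """Return the coding dict for one comment id, or None if absent."""
    for record in json.loads(response_text):
        if record.get("id") == comment_id:
            return record
    return None

coding = codes_for("ytc_UgypaiNF7ClQVNYpOqV4AaABAg", raw_response)
print(coding["emotion"])  # -> indifference
```

In practice the model output would also need validation against the allowed code values for each dimension before it is stored, since an LLM can emit labels outside the codebook.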