Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There is still a possibility to secure our survival. We must urgently define the precise limit beyond which AI development poses even a small probability of leading to our extinction. Research must identify the point of no return, determining the maximum extent to which AI can be developed without provoking mass extermination. Governments, along with lobbies and individuals, must renounce the pursuit of supremacy over one another; instead, they must unite to establish a limit that must be respected and enforced. The most challenging task is achieving global unity to respect that critical threshold.

But what can a single human do to achieve this monumental goal? Each of us must speak about the probable extinction caused by the excessive development and implementation of AI in our society. We must talk about it with everyone we can and, in turn, urge them to do the same. We must create a public opinion that demands a limitation on AI development and the strict adherence to that limit, a public opinion that exerts pressure on governments. Ideas need time to propagate across the population. Everyone must openly advocate for AI limitation and exert coordinated pressure on governments and companies to collectively agree upon and enforce a permanent ceiling on AI development. Governments must then unite to discuss the matter and prevent any further AI development beyond the fixed threshold.

This is humanity's final test: we can prove ourselves to be intelligent or be the foolish animals who brought about their own extinction through their own creation. Only by uniting can we avoid extinction, proving that the human being is an intelligent species. Alternatively, we can continue to overpower one another, constantly striving to grab more in a race for AI development that will inevitably lead us to extinction. Human beings can only be truly intelligent when they act together.

It is not necessary for everyone to think the same way; it is enough that the majority thinks this way.
Source: YouTube, "AI Governance", 2025-11-30T22:2… (♥ 1)
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytc_UgwrNAOM3z-M3BzrYiN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgzuV0xh1_qktC6i9ap4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgziIFVwkI5wPw1RzMh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgwUWA4Avz_rObjzOLh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgyLZgOYWFeXUMo21B54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
 {"id":"ytc_Ugw8gqhmEEX9heVEdLd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgwkOux6XM-OAd-_qHt4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgyJy-Cmc7hmlvuvqP54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyAPOe4PN3s82lXitJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_UgxP7Nm66ffB1TJ5SWl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]
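Note that the raw response as originally emitted terminated the array with ")" rather than "]", which would make it invalid JSON; a likely explanation for the all-"unclear" coding above is a parse failure followed by a fallback. A minimal sketch of that assumed behavior (the function name, dimension list, and fallback policy are illustrative assumptions, not the pipeline's actual code):

```python
import json

# Dimensions coded per comment, taken from the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def parse_coding(raw: str) -> list[dict]:
    """Parse the model's JSON array of per-comment codes.

    Hypothetical fallback: if the raw text is not valid JSON (e.g. the
    array is closed with ")" instead of "]"), code every dimension as
    "unclear" rather than crashing the pipeline.
    """
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return [{dim: "unclear" for dim in DIMENSIONS}]


# A truncated illustration of the malformed output: trailing ")" breaks it.
bad = '[{"id":"ytc_x","responsibility":"government"})'
print(parse_coding(bad))
```

Running this on the malformed string yields a single record with every dimension set to "unclear", matching the coding result shown above.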