Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
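The ID lookup described here can be sketched as a simple index over coded records. This is a hypothetical illustration, not the tool's actual code; it assumes each raw LLM response is a JSON array of objects with the fields shown in the "Raw LLM Response" section below (`id`, `responsibility`, `reasoning`, `policy`, `emotion`).

```python
# Minimal sketch of lookup-by-comment-ID over coded records.
# The record shape mirrors the "Raw LLM Response" section; all
# function and variable names here are illustrative assumptions.
import json

raw = '''[
  {"id": "ytc_UgyAPOe4PN3s82lXitJ4AaABAg",
   "responsibility": "none", "reasoning": "virtue",
   "policy": "none", "emotion": "approval"}
]'''

# Build an index from comment ID to its coded record.
index = {rec["id"]: rec for rec in json.loads(raw)}

def lookup(comment_id):
    """Return the coded record for a comment ID, or None if absent."""
    return index.get(comment_id)

print(lookup("ytc_UgyAPOe4PN3s82lXitJ4AaABAg")["emotion"])  # approval
```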
Random samples (click to inspect):
- "This is what I meant when I said people are waaay too clam about the way AI is a…" (ytc_Ugx2Xq3Kr…)
- "Not me telling the ai bots that I’m pregnant and then having my character die of…" (ytc_UgxGy4eug…)
- "This doesn't make sense. South Koreans drink more than Irish or British people. …" (rdc_clvcymb)
- "This is so scary and FU**ED up on so many levels. First, this cop and all involv…" (ytc_UgxL1rUM1…)
- "@lynco3296 The lack of regulation is the biggest reason. We also have polluti…" (ytr_UgxeygMGu…)
- "Not a single job has been replaced by AI yet, but it may replace some jobs in th…" (ytc_UgylqmhYv…)
- "The only time generative ai imagery is kind of ok is if its for private use...li…" (ytc_UgzyhWmQ8…)
- "The models we do have cannot code in a way it produces scallable software. Self…" (ytc_UgwZcFl-G…)
Comment
There is still a possibility to secure our survival.
We must urgently define the precise limit beyond which AI development poses even a small probability of leading to our extinction.
Research must identify the point of no return, determining the maximum extent to which AI can be developed without provoking mass extermination.
Governments, along with lobbies and individuals, must renounce the pursuit of supremacy over one another; instead, they must unite to establish a limit that must be respected and enforced. The most challenging task is achieving global unity to respect that critical threshold.
But what can a single human do to achieve this monumental goal?
Each of us must speak about the probable extinction caused by the excessive development and implementation of AI in our society. We must talk about it with everyone we can and, in turn, urge them to do the same. We must create a public opinion that demands a limitation on AI development and the strict adherence to that limit—a public opinion that exerts pressure on governments. Ideas need time to propagate across the population. Everyone must openly advocate for AI limitation and exert coordinated pressure on governments and companies to collectively agree upon and enforce a permanent ceiling on AI development. Governments must then unite to discuss the matter and prevent any further AI development beyond the fixed threshold.
This is humanity's final test: we can prove ourselves to be intelligent or be the foolish animals who brought about their own extinction through their own creation.
Only by uniting can we avoid extinction, proving that the human being is an intelligent species. Alternatively, we can continue to overpower one another, constantly striving to grab more in a race for AI development that will inevitably lead us to extinction. Human beings can only be truly intelligent when they act together.
It is not necessary for everyone to think the same way; it is enough that the majority thinks this way.
youtube · AI Governance · 2025-11-30T22:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_UgwrNAOM3z-M3BzrYiN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzuV0xh1_qktC6i9ap4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgziIFVwkI5wPw1RzMh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwUWA4Avz_rObjzOLh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyLZgOYWFeXUMo21B54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw8gqhmEEX9heVEdLd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwkOux6XM-OAd-_qHt4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyJy-Cmc7hmlvuvqP54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyAPOe4PN3s82lXitJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxP7Nm66ffB1TJ5SWl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"})
```
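Note that the raw response above is not valid JSON: it closes with `)` rather than `]`. A parse failure like this (or a comment ID missing from the batch) is one plausible reason the coding result above falls back to "unclear" on every dimension. A defensive parser can be sketched as follows; this is an assumed illustration, not the tool's actual implementation, and all names are hypothetical.

```python
# Hedged sketch: parse a raw LLM batch response and extract one comment's
# codes, defaulting every dimension to "unclear" when the JSON is
# malformed or the comment ID is absent from the batch.
import json

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def code_for(raw_text, comment_id):
    """Return {dimension: value} for comment_id, defaulting to 'unclear'."""
    fallback = {dim: "unclear" for dim in DIMENSIONS}
    try:
        records = json.loads(raw_text)
    except json.JSONDecodeError:
        return fallback  # malformed model output, e.g. ")" instead of "]"
    by_id = {rec.get("id"): rec for rec in records if isinstance(rec, dict)}
    rec = by_id.get(comment_id)
    if rec is None:
        return fallback  # comment not present in this batch
    return {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}

good = ('[{"id": "c1", "responsibility": "government", '
        '"reasoning": "consequentialist", "policy": "regulate", '
        '"emotion": "fear"}]')
print(code_for(good, "c1")["policy"])     # regulate
print(code_for(good, "c2")["emotion"])    # unclear (ID not in batch)
print(code_for("[{...", "c1")["policy"])  # unclear (parse failure)
```

Re-running the extraction with such a fallback makes an "unclear"-only row like the one in the Coding Result table distinguishable from a genuine "unclear" code only by also logging the parse error.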