Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I disagree with the statement ai is inherently unethical. I think you might be c…" (`ytr_UgzC9AIUx…`)
- "This guy is dangerous, dont listen to him, super advanced AI, or AGI is extremel…" (`ytc_Ugw57O4g7…`)
- "DISGUSTING. Logic and Compassion seems non-existent in SK society. I pray for st…" (`ytc_UgzU-m8nu…`)
- "Click bait! AI isn't here to kill us. But it knows whare all the ped0s sleep ill…" (`ytc_UgwavOeTh…`)
- "AI cannot become conscious. That is not possible. Yet, the danger is not in AI …" (`ytc_Ugz4lpN-p…`)
- "Big tech backed trump because only someone lacking a moral compass could force t…" (`ytc_UgyJAqSfv…`)
- "Management claims it is due to A.I (Artificial Intelligence) but it is actually …" (`ytc_Ugw0irmUx…`)
- "I mean can't you program reasoning if you can program information into a program…" (`ytc_UgySAQuxK…`)
Comment
I think I'd have to argue that you overestimate our ability to pull the plug on AI. It's not like nuclear power which has a resource governments can control. It's a computer program that can run on laptops. It's much more easily distributed, duplicated, hidden, grown.
And even if we halt, China & Russia won't. For good or ill, they'll get ahead of us, and once ahead , it could be impossible to catch up as more powerful AIs will build more powerful AIs much more quickly, leaving us behind in the exponentially accelerating dust.
And if the worst case scenario happens, it won't be Hiroshima. It will be a super intelligence that can outthink all of humanity created in a despotic country with the infrastructure to control their populations already in place. Our only defense will be a super intelligence of our own.
Frankly, I think the genie is already out of the bottle. We're going to have to learn how to build guardrails at the same time we're hurtling down a highway without brakes.
youtube · AI Governance · 2023-03-31T15:5… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgweKnRttT0A3_N4xVZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzq2lzR2K4bd2AXWJ94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxVJzYBd4PntY_Q9714AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwKEuY1oPO2Czbr09d4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxRYpkAu3CbG6W0sWJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz7tqqVqDr5pMIhSC54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxCF9JFQRQbu8b_KrV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy30UJP1X_qhFhXTuN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxOSOHEgsY1BNy1eR94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgytlOUcdWnU1p0IRcB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
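The lookup-by-ID view can be reproduced offline from a raw response like the one above: parse the JSON array the coder returns and index its records by comment `id`. A minimal sketch (the variable names are illustrative, and the two records are copied from the response above):

```python
import json

# One batch response from the LLM coder: a JSON array where each element
# carries the comment id plus the four coding dimensions
# (responsibility, reasoning, policy, emotion).
raw_response = """[
  {"id": "ytc_UgweKnRttT0A3_N4xVZ4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugzq2lzR2K4bd2AXWJ94AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

# Index the coded records by comment id for constant-time lookup.
coded = {row["id"]: row for row in json.loads(raw_response)}

row = coded["ytc_Ugzq2lzR2K4bd2AXWJ94AaABAg"]
print(row["responsibility"], row["policy"])  # government regulate
```

Keeping the full record under each id (rather than flattening to a single label) preserves all four dimensions for the table view shown in the Coding Result section.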