Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below (previews truncated):

- `ytc_UgwTOOnAQ…`: "We should all be vigilant about what we watch and apply critical thinking as to …"
- `rdc_dd3xbwe`: "At first is totally based on nationality, (In the case of Canada if I was an EU …"
- `ytr_UgxFcJPYy…`: "We didn't evolve to be racist, we evolved to save ourselves from dangerous anima…"
- `ytc_UgwDei0yV…`: "I do not read AI robot to tell me the story of these Palestinian losers/terroris…"
- `ytc_UgzCEWK8a…`: "I sincerely pray for those working in the art field. Most people underestimate t…"
- `ytc_UgzC0WCmr…`: "The DANGEROUS paradox is that the quickest way to win the Ai race is to let Ai o…"
- `ytc_Ugy3gs0ZS…`: "When the more AI increasing the more women becoming the toy, that's sad and very…"
- `ytc_UgyQg4BSG…`: "This is a gross misrepresentation of AI. AI is not this complex and is probably …"
Comment
We are locked into a nuclear arms race, this time it is AI. However, albeit horrifying, Nuclear weapons can be controlled. ASI on the other hand is something completely different. Humanity is pushing as hard as it can to create a super intelligence, knowing full well, that that superintelligence will destroy us. You cannot control something for very long that is smarter than you, and nobody wants to stop creating it, because if you stop, the 'other guy' will beat you too it. The irony is astounding.
As far as morality, there are a few researchers honestly sounding alarms and not moving from one cash cow to another. The one everyone should be paying attention to is Roman Yampolskiy, PhD, University of Louisville. He is the number 1 AI safety guy. He has said that NO company is working on the 'stop switch', we will achieve AGI soon, the recursively ASI will come shortly thereafter. His future predictions...well let's just say, it's the stuff of nightmares. Look him up.
Source: youtube · Category: AI Governance · Posted: 2026-03-18T17:5… · Likes: 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwg1IREA1Yk8wYD_b94AaABAg","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzsKmMJL9bT7DUyB8l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyfKTCIb1co2g1Glwt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugx4VUYQrncQCgnQuDx4AaABAg","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgyWEt5vd1Nqmy_lbK54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwvQtW_pFMYcQLGKk14AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugwn33U0iDSszytjN6J4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz2zD7BCpSzRYZzInB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgywuKKpI_OOMyUS8v14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzdKjQ5deBJfbd-gv54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
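The raw LLM response above is a JSON array with one record per comment, each coded along the four dimensions shown in the Coding Result table. A minimal sketch of how such a batch response could be parsed and validated is below. The allowed label sets here are only those visible in this dump; the actual codebook may define more categories, and `parse_batch` is a hypothetical helper name, not part of the pipeline shown.

```python
import json

# Label sets inferred from the values visible in this sample response
# (assumption: the real codebook may include additional categories).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer",
                       "user", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference",
                "resignation", "mixed"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw batch response, keeping only records whose four
    dimension values all belong to the allowed label sets."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Validating against a closed label set catches the occasional record where the model invents an off-codebook value, so it can be re-queued for recoding rather than silently skewing the counts.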