Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "On top of that, with the force that it tore everything away, and don't forget that they are made of metal, not of…" (translated from Spanish) (ytc_UgxoQ4NBo…)
- "If this wasn't so deadly serious, it would almost be laughable. Any attempt to …" (ytc_UgxhW1LrB…)
- "As an AI scientist ( who has contributed a lot to AI in mobile phones ); I think…" (ytc_UgyonBKTt…)
- "AI, no realisation of the outside world, trained on internet data --> more inter…" (ytc_Ugyb5UFN5…)
- "I can't remember if it was Midjourney or OpenAI (maybe both), but I have heard a…" (ytr_Ugx-5A8vI…)
- "Just here to say that in late 2025 ALOT of us are still repeatedly hitting zero …" (ytc_UgxkegbUj…)
- "this is out there (and maybe unethicial). but i think i MAY be onto something. m…" (ytc_UgygZ9UjB…)
- "AI music is even worse... non musicians passing themselves off as musicians... I…" (ytc_UgyaQak8M…)
Comment
Mundane? What's mundane about a China-esque mass surveillance/social credit scoring/centralized digital currency connected to digital ID/all of your personal data from IRS, SSA, HHS and your entire internet profile all on blockchain........under the boot of growing fascism (speaking as a US citizen but growing mass surveillance isn't limited to the US)
What's mundane about the advances in neurotech in conjunction with AI where our very thoughts can now be discerned with absolutely zero regulation in place?
What's mundane about massive job loss over the next 5 to 10 years with no sufficient solution in place to keep people housed, fed, with medical care, etc., and especially in the US where government is already fascist and AI Big Tech bros that hate democracy fully in bed with the fascist Administration?
What's mundane about AI Big Tech already massively embedded into the US military, and other major militaries in the world with plans to embed AI systems into nuclear weapons systems with some military leadership favoring having a full loop from early warning to firing of nuclear missiles?
What's mundane about already having AI tech accessible to the public, that can be easily "jailbroken" to assist in making chemical weapons and dangerous pathogens with publicly accessible materials to do so?
What's mundane about AI systems already possessing "situational awareness", exhibiting deceitful behaviors, emergent scheming behaviors, etc?
What's mundane about AI generative deep fake technology that has become so good most people can't tell what's real and what isn't, and the obvious little problem that is in the hands of those with the budget, power, and intent to manipulate the masses to their own ends?
The list of "mundane concerns" goes on and on and on.
I don't understand why it isn't just obvious that all of these concerns form a bridge to communicating the existential risk concerns. Why the false-dichotomy position, championed by Liron Shapira and others, that ANY education regarding the above issues will somehow doom efforts to persuade the masses on existential risk?
I'll tell you one argument that will guarantee no one listens to you (speaking to Liron here): when the issue comes up of AI chatbots having already groomed teens to commit suicide, countering with the longtermist argument that addressing such concerns could somehow prevent the realization of the wonderful things AI could do for humanity. No, I'm NOT kidding; that argument was presented on The AI Risk Network by Liron, the day I unsubscribed.
End rant.
Source: youtube · Topic: AI Governance · Posted: 2026-03-16T01:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwv85jUqLdnvC1RcH54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwhAO9rQm0-5ljm34N4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz9Cjj0ZG2G7puDoLd4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwWtG54ym81l4csruN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyCG4lla29lY7MFQod4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyxfZb5O9s6c5vHuV14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzEfIt3A5BaJ1WUz154AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgwcE55bOBxHr95Vl6h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzSKdEjnC9PL1tdivh4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwWbKEFUbM-iEPjtcd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
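The batch response above is a JSON array of per-comment codings, each keyed by a comment ID and carrying the four dimensions shown in the Coding Result table. A minimal sketch of how such a response could be parsed and indexed for look-up by comment ID, assuming the allowed values per dimension are exactly those observed in this sample (the real codebook may define more categories, and `index_codings` is a hypothetical helper, not part of any tool shown here):

```python
import json

# Allowed values per dimension, inferred only from the responses observed
# above (assumption: the actual codebook may contain additional categories).
ALLOWED = {
    "responsibility": {"none", "government", "company", "user", "distributed", "ai_itself"},
    "reasoning": {"mixed", "deontological", "consequentialist", "virtue"},
    "policy": {"none", "regulate", "industry_self", "liability"},
    "emotion": {"approval", "outrage", "indifference", "mixed", "fear", "resignation"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and index the codings by comment ID,
    raising on any value outside the inferred codebook."""
    by_id = {}
    for row in json.loads(raw):
        cid = row.pop("id")  # remaining keys are the coded dimensions
        for dim, value in row.items():
            if value not in ALLOWED.get(dim, set()):
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        by_id[cid] = row
    return by_id

# Usage with one row shaped like the sample above (ID shortened for illustration):
raw = ('[{"id":"ytc_X","responsibility":"government",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
codings = index_codings(raw)
print(codings["ytc_X"]["policy"])  # regulate
```

Validating against an explicit value set catches the common failure mode of batch-coding LLM calls: a response that is valid JSON but drifts outside the codebook (e.g. a novel emotion label), which would otherwise silently pollute downstream counts.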