Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `rdc_ogx1ylr`: >>claimed it will be funded by an “increase in the money supply” triggered…
- `rdc_oh5xak7`: This is just not even true. The latest Ai models with a node based structure li…
- `ytc_UgwIW6dkO…`: Don't worry, Microsoft will screw it all up for A.I.. (speaking from a lifetime …
- `ytc_UgxMIEI7O…`: Way b4 AI could do us any harm, we'll be annihilated by his highness Trumpstein…
- `ytc_UgzWFz2ij…`: I don’t use ChatGPT but honestly I would’ve turned right tf around too. Ranting/…
- `ytc_Ugyrio-vh…`: In fairness, anyone who relies on AI for medical advice probably isn't using his…
- `ytc_UgwLlKZ3Q…`: This might be difficult for the women being "affected" by the AI porn to underst…
- `ytc_UgwkjQD4t…`: Sounds like the ai is pulling from a pretty accurate date base and I don’t think…
Comment
It enrages me, how can any “benefit for healthcare and education” be worth developing something, having a side threat of making everyone extinct.
Universal truth is : every coin has two sides. And ones side of AI is so much bigger a threat that any possible “good” which is also questionable , which it was developed for. What do these people want to achieve? And just saying that I wanted something good for people, so I don’t feel responsible for the threat I have created sounds absolutely irresponsible and naive.
The road to hell is paved with good intentions indeed. And it has never been more true. Everyone knows it. And a bunch is smart people and rich people bring the whole civilization to a threat.
It’s not only about the mentioned risks. Mental debt which AI is creating, potential damage to an ability to be thinking, memorizing, using brains is also there. How harmful can it be for people, even if we don’t seize existing.
Someone have created for us huge problems without asking if we wanted to solve them.
I am speechless.
Source: youtube · Topic: AI Governance · Date: 2025-07-20T13:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz4X6jTmffs2grqg254AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzQdlaBz1T2De6MGRB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwp-AxHn3LS1f4niK94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzByd18WcyUMvIUH194AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzNuuWlsbNCoc4T-up4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxWrtVzILfEXbnM-BB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwx0lwTzTwS1V_eBHh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxMTIVX_OVhfDTFRDx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzqMILiNwmbGZ5XOdR4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgxQtbqJeOcDWq9UPI14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
```
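The raw response is a JSON array of per-comment codes across four dimensions. A minimal sketch of how such a response could be parsed and validated before the values reach a results table like the one above — the allowed category values here are inferred from this sample output only, not from the project's full codebook, so treat them as assumptions:

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# The actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject malformed records."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} value {value!r}")
    return records

# Hypothetical single-record response in the same shape as the sample.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"ban","emotion":"outrage"}]')
coded = parse_coding_response(raw)
print(coded[0]["policy"])  # -> ban
```

Validating against a closed value set catches the common failure mode where the model invents a category outside the codebook, so bad codes fail loudly instead of silently entering the results.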