Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgwH7QuKw…`: "A.I. needs regulation FAST! Because all this is a VERY possible and likely outco…"
- `ytc_Ugz841pNi…`: "I am a dentist who work 12 hours a day, 5 days a week, i have deep passion in pr…"
- `ytr_Ugz1Rwm3d…`: "And there isn't a lot of code for most problems people actually solve on github …"
- `ytc_UgxD5JY6P…`: "ai isn’t gonna replace y’all, chill. im also an artist and you have to realise t…"
- `ytc_Ugw_NX6TO…`: "Hey Hank, topic suggestion: Im assuming the limit to AI right now is the workin…"
- `ytc_UgzZopNMH…`: "Do y'all remember back when the deep fake technology was presented? How people s…"
- `rdc_nahu4ex`: "You get what you give - which is sad because some people may never see this side…"
- `ytc_UgwkNmUPy…`: "AI wont take all of our Jobs, there will always be new jobs, like Police, infras…"
Comment
We can't hope to understand an AI or AGI when push comes to shove, because we refuse to do so for ourselves first.
These systems are being accelerated based on fear and greed, and we should be cultivating them based on love and understanding.
Unfortunately, love and understanding don't net a company USD$4.4 trillion overnight, and as long as unregulated greed push these models into reality, the human race will only get to pray to our engineered gods for mercy, and hope they only use us long enough to push out to other planets to perpetuate themselves in the stars.
youtube · AI Governance · 2025-10-22T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxhOCQzwxbyr7hKraN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzYqtJ-MBFJPPvuraN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxDg-LMZeREtXCKDh94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwQxoKgFnWaBc_Aiyl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyB645BQk0rM9CbzXR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwnrvMWXZG_1oMAjNF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwFh35U7dH9mqFQDIZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxX35whlfJ2_6sq_Gt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx0odXpYFb9uMEiRI94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyJl32IL5osqpRxqAh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
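A raw response like this can be checked mechanically before the codings are accepted. Below is a minimal validation sketch; the allowed values per dimension are inferred from the sample above and may not be the full codebook, and `validate_response` is a hypothetical helper, not part of any pipeline described here.

```python
import json

# Vocabulary inferred from the sample response; the real codebook may define more values.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows that carry an id
    plus in-vocabulary values for all four coding dimensions."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # malformed row: skip rather than guess
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Two-row example: the second row uses an out-of-vocabulary responsibility value.
raw = (
    '[{"id":"ytc_x","responsibility":"company","reasoning":"virtue",'
    '"policy":"regulate","emotion":"fear"},'
    '{"id":"ytc_y","responsibility":"aliens","reasoning":"mixed",'
    '"policy":"none","emotion":"fear"}]'
)
print([r["id"] for r in validate_response(raw)])  # ['ytc_x']
```

Rows that fail validation would be queued for re-coding rather than silently dropped in a production setting.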