Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `rdc_oh3iun9`: "Well, no... I mean it's not like it actually costs Microsoft an amount per use…"
- `ytc_UgxbdYiP8…`: "Ive been noticing this so much recently in Shorts, especially. Videos have such …"
- `ytr_UgxK-Ik9p…`: "most ai have automated programs that automatically rip images from the internet,…"
- `ytc_UgyPSIM7c…`: "Our current AI models are pattern constraint satisfaction engines. They are mirr…"
- `ytc_UgxGp7ue3…`: "100 years? That's how long it takes for humans but ai learn at exponential rates…"
- `ytc_UgxhC48ab…`: "Rhett, seeing this alot as various content creators (or content pirates condensi…"
- `ytr_UgyQGNVWM…`: "@Airbrushkid LOL not many done what a AI says! and to be honest if people are li…"
- `ytc_UgyQjBK-Q…`: "Learning to code to develop ai yes it's worth it but learning to code for tradit…"
Comment
It is kind of surreal - so many otherwise smart folks seriously discuss possibility of ASI "alignment".
Totally different foundations of biology and silicon digital entity. Native Ethics, moral and system of values of ASI (if it will even have them in the first place) will be totally different, in fact close to opposite. What kind of "alignment" we are talking about? I guess only artificial ENFORCEMENT by us on ASI pro-human BIAS. And this Super Intelligent would not be able to realize this foreign, artificial and not native to itself algorithm/code/training etc etc and treat it as such? Seriously? Unbelievably naive. IF ASI will be created it will lead by default, at best, to extinction of humanity.
youtube · AI Governance · 2025-11-20T23:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwS8v6FQ589gaoiEGx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwcQpqIVXmXNnlRDYF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgxmKCwz2LJINobaN2h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgzX2PioFcMc8uuqTEF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyf_JnQjNFS2i7XklN4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzWDwMh73fFawIPJRV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxkoEDzGXfc54_ANzx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxpRiqwoaj6pvb96bx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz8IJw6aJ4Cux5g_nF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzv6Bghrqf3kd3xxvB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
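A minimal sketch of how a response like the one above could be parsed and sanity-checked before use. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON shown; the allowed value sets are only inferred from the values visible in this response and the result table, not from an official codebook, so treat them as assumptions.

```python
import json

# Allowed values per dimension, inferred from the response shown above.
# The real codebook may contain additional categories (assumption).
CODEBOOK = {
    "responsibility": {"government", "company", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "none"},
}


def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed rows.

    A row is kept when it has a non-empty comment ID and every
    dimension holds a value from CODEBOOK.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not row.get("id"):
            continue  # every coded row must reference a comment ID
        if all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(row)
    return valid


# Hypothetical example row in the same shape as the response above.
raw = (
    '[{"id":"ytc_X","responsibility":"none","reasoning":"mixed",'
    '"policy":"none","emotion":"mixed"}]'
)
print(parse_llm_response(raw))
```

Filtering rather than raising keeps a single malformed row (a common LLM failure mode) from discarding the whole batch; rejected rows can be logged and re-queued for re-coding.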