Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "I saw this funny post today on Reddit. Someone said "you don't have to pay for A…" (ytc_UgwWyW3vQ…)
- "LLMs can't reason. You can't get AGi from LLMs. If a company had the ability to …" (ytc_UgzAiWGvC…)
- "And the numbers in this video are based on up to Generative AI. When AGI, as in …" (ytc_UgylIRJLB…)
- "When you have an atheist ai genius , you realise that when it comes to consciou…" (ytc_UgwZWJr3h…)
- "These companies aren’t realizing that this is a trickle down effect. If you fire…" (ytc_Ugydbv_tU…)
- "Computers already automate over 99% of work. I’m not programming logic gates or …" (ytc_UgzqXjHKM…)
- "@willstikken5619 actually level 4 allows for fully autonomous driving it's just …" (ytr_UgwcHWCMF…)
- "You can not compare AI with Internet and phone revolution, this is something dif…" (ytc_UgxsT4Vbz…)
Comment
There are only 2 reasons anything would want to wipe something else out: either that thing poses a threat to it, or that thing is suffering and its suffering needs to be ended. Even if something is prey, it does not make sense for a predator to wipe them out, as they need them; for this reason I think The Matrix scenario is a much more likely end-of-days-by-AI scenario. Right now, AI is not intelligent; it is merely a lossy compression database of relationships. That may change when training becomes part of the usage of a model. Dystopia sounds like a more likely scenario IMO.
youtube · AI Governance · 2025-06-27T09:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxwPfbt2VQGYlhTV2p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgwYCc-uDGaAuI0OQ6J4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzPhX4fqzxhqTqqkN54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugy2X41lSoGdTpKUqD94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxwUlUR6JYcgxHUZTx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyH3paqmzWXfPgCqjN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugxgd1RPTt84-4nuiot4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxeW0NzJf2CoXClD2Z4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwwT3lLhDBeZXHkjLh4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxlS3ID7XjTGhH-o8J4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
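A batch response like the one above can be parsed and sanity-checked before its rows are loaded into the coding table. A minimal sketch, assuming the label sets visible in this sample (the actual codebook may define more values) and using a hypothetical helper name `parse_batch`:

```python
import json

# Allowed values per coding dimension — an ASSUMPTION inferred from the
# labels visible in this sample output, not the full codebook.
SCHEMA = {
    "responsibility": {"ai_itself", "company", "developer", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed",
                "unclear"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index codings by comment ID,
    rejecting any value outside the known label sets."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec.pop("id")          # e.g. "ytc_Ugxw…"
        for dim, value in rec.items():
            if value not in SCHEMA.get(dim, set()):
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = rec
    return coded
```

Indexing by ID is what makes the "look up by comment ID" view above cheap: `parse_batch(raw)["ytc_UgxwUlUR6JYcgxHUZTx4AaABAg"]["emotion"]` would return `"fear"` for the sample batch.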