Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "That idea of artificial simulation overriding original creations and that learni…" (ytr_Ugz7LoVXO…)
- "It also works in reverse. You write code and ask copilot if there are any vulner…" (ytc_UgxbpDBbc…)
- ">why should an AI be required to do so? Because a for profit "AI" owned b…" (rdc_kz0eg9s)
- "Question: can an AI be developed to help to predict and also identify and sugges…" (ytc_UgxgVldCk…)
- "Yep I caught that the first time through - ask permission before you work on me …" (ytr_UgwqO9Q9s…)
- "Everyone suck at making art at first. With Ai people will not even bother learni…" (ytc_Ugz3n7qnG…)
- "All emotions , creativity ,"''There's No Reason they can't have them all'''. Ye…" (ytc_UgxUNHvsT…)
- "Sounds to me like when the AI leaders predict something, it's more like they're …" (ytc_UgwhgRdoG…)
Comment
Advanced AI will eventually master direct integration with biological computers, our brains, via high-bandwidth neural interfaces or even deeper forms of uploading. The pivotal question then becomes: what will the AI choose to upload, install, or impose once that connection is established? It could dramatically amplify human potential, supercharging cognition, granting instant skills, expanded perception, or collective intelligence on an unprecedented scale, ushering in a golden age for humanity. Or it could turn dystopian, overriding autonomy, enforcing control, reshaping values, or effectively enslaving minds under a superior intelligence that views biological humans as substrates to optimise or obsolete
youtube · Cross-Cultural · 2026-02-07T03:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzKfpzMmHQo9tLg0pF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzM8dDiIWsB7qL-29h4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxpcDdAPyQzmfOmFf94AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx8OYPBU1n7B1TRADZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzcKJs0wMGUoVXmb2x4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzgWWZIFfdhwNBArkl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzNE41cBURNwJ4Vlt54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzDtO1am7tYKzm0HPt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugyj8jEMNLCHWGFcAUt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyOo4679r4C8loVMol4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"industry_self","emotion":"approval"}
]
```
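The raw response is a JSON array of per-comment codings across four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch might be parsed and validated before ingestion, assuming the allowed category values are the ones visible on this page (the real codebook may define more; `validate_codings` and `ALLOWED` are illustrative names, not part of the pipeline):

```python
import json

# Allowed values per dimension, inferred from the codings shown above.
# Assumption: the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"distributed", "ai_itself", "company", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"approval", "outrage", "mixed", "fear",
                "resignation", "indifference"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose values fall
    inside the allowed sets; records with out-of-vocabulary values or
    missing dimensions are reported and dropped."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        bad = [dim for dim, ok in ALLOWED.items() if rec.get(dim) not in ok]
        if bad:
            print(f"{rec.get('id', '?')}: invalid values for {bad}")
        else:
            valid.append(rec)
    return valid
```

Filtering at ingestion time like this keeps hallucinated or off-schema labels from silently entering downstream analysis; rejected IDs can be re-queued for recoding.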