Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or click one of the random samples below to inspect it (a sketch of a possible lookup follows the sample list).
- "Hey @jonathanbravojaramillo3761, gracias for your comment! With advancements in …" (ytr_UgxEaq3AD…)
- "seriously, a lot of people are like “someone will have to program the A.I.” and …" (ytr_UgwLvNbts…)
- "Why not just say that while using ai goes against the spirit of the challenge, n…" (ytc_UgxFF74kZ…)
- "As a writer, I can see the value in AI. I can toss my description into the pot …" (ytc_Ugx5zMOcY…)
- "The real danger of AI is if Neurolink will evolve into wireless direct access to…" (ytc_UgxtsPfw4…)
- "That's what I was thinking!! Like, if AI is used to "storyboard" or "pilot" a se…" (ytr_Ugyrrre54…)
- "This is how I use chatgpt. I always start a session by telling it something like…" (ytc_UgyBxzHlW…)
- "This is not made by DUST, this was made and originally posted by Future of Life …" (ytc_Ugwo2LLee…)
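What such a lookup does under the hood depends on how coded comments and raw responses are stored; the sketch below is purely illustrative and assumes one JSON record per line in a JSONL file, with hypothetical `id` and `raw_llm_response` fields rather than the tool's actual schema.

```python
import json
from typing import Optional

def lookup_raw_response(jsonl_path: str, comment_id: str) -> Optional[dict]:
    """Scan a JSONL file of coded comments and return the record for one comment ID."""
    with open(jsonl_path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Illustrative usage, with an ID that appears in the raw response shown below.
record = lookup_raw_response("coded_comments.jsonl", "ytc_Ugzr_FOQ-xWU8KhMZA54AaABAg")
if record:
    print(record.get("raw_llm_response"))
```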
Comment
Thankfully, superintelligence is not imminent within the next hundred years or so. The development of such an entity would require faculties and infrastructures that are currently nonexistent and would be alarming to even consider. Moreover, the energy demands for creating and maintaining a truly autonomous AI would be immense and impractical. While contemplating the possibility of superintelligence can be intriguing, rest assured it is not progressing anytime soon. The technological, ethical, and resource challenges make the creation of a fully autonomous AI function highly unlikely in the near future.
Source: youtube · AI Moral Status · 2026-04-21T21:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
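A coding result is just these four categorical dimensions plus a timestamp, so it maps naturally onto a small record type. The sketch below is a minimal illustration with assumed field names; the values listed in the comments are only the codes visible in the example batch below, not necessarily the full codebook.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    comment_id: str
    responsibility: str  # seen in this batch: none, user, developer, distributed, ai_itself
    reasoning: str       # seen in this batch: consequentialist, deontological, contractualist, mixed, unclear
    policy: str          # seen in this batch: none
    emotion: str         # seen in this batch: resignation, fear, outrage, approval, indifference, mixed
    coded_at: datetime

# The values from the table above; the comment ID is taken from the entry in the
# raw response below whose codes match this table.
example = CodingResult(
    comment_id="ytc_Ugzr_FOQ-xWU8KhMZA54AaABAg",
    responsibility="none",
    reasoning="consequentialist",
    policy="none",
    emotion="resignation",
    coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
)
```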
Raw LLM Response
[
{"id":"ytc_UgzRZxV44Xq_XrvMYLh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxd9yjVDlkDbhzuklN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz1D5G7yRwA1uz1eWB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwOnKnJqdDarjRCD5l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxP4PPXP7EztDu6B794AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzr_FOQ-xWU8KhMZA54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw2nxLIt_-nSWQZ0pB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwV-o3knFvNjuVlarF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxT5heeU-CmjE5Vbex4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyB4C2m6E9M9cfYvqh4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"mixed"}
]
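Because the raw response is a JSON array with one object per comment, recovering individual codings is a single parse plus an index by ID. A minimal sketch, assuming the model returned valid JSON as above; real code would also need to handle malformed or truncated output.

```python
import json

# `raw` would be the full array returned by the model; one entry is shown here for brevity.
raw = ('[{"id":"ytc_Ugzr_FOQ-xWU8KhMZA54AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"resignation"}]')

entries = json.loads(raw)
by_id = {entry["id"]: entry for entry in entries}

coding = by_id["ytc_Ugzr_FOQ-xWU8KhMZA54AaABAg"]
print(coding["responsibility"], coding["reasoning"], coding["policy"], coding["emotion"])
# none consequentialist none resignation
```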