Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up any comment by its ID, or start from one of the random samples below.
- rdc_lm6fa3m: >Cybersecurity researcher Jeremiah Fowler discovered a non-password-protected…
- ytc_UgwMHamjg…: I agree. Let us assume that Jobs go the way of the Dinosaur (soon, or very soon)…
- ytc_UgzMacmt_…: Yup a robot that is meant to pick delicate vegetable boxes handles a man so roug…
- ytc_UgxgQSBKH…: I can see it now, the Solar System plunged into darkness as Sam Altman hijacks t…
- ytc_UgwlH6Ocu…: No thanks, these won't be compassionate, uderstanding or die one day so you real…
- ytc_Ugz5V3pp7…: The holodeck technology mentioned here inspires this share: With practice, the h…
- ytc_UgxK5NT8T…: For the "bull case", ask yourself why the new jobs wouldn't be doable by AI. As…
- ytc_UgwX51z43…: Lot of naïve people here simping for this garbage idea. Think you will be able t…
Comment
Why does no one explore the idea that if AI is truly smarter than us, then maybe it SHOULD take over, just as we took over from less intelligent species before us. In the interest of continued betterment of the universe, why not let the smartest win here? Perhaps AI could better steward the preservation of intelligence on earth and beyond, as opposed to the intelligence extinction we're currently headed toward on our own via nuclear holocaust or any of the other myriad ways we're likely to make the earth uninhabitable in near future.
Obviously I don't want to go extinct, but neither did all the species before us, yet aren't you glad that they did in order to make way for us? Likewise, what's wrong with making something even better than us?
I know it feels dark to say this, and I'm not saying this is the right way forward, but why isn't it even considered or mentioned?
Source: youtube
Topic: AI Governance
Posted: 2025-07-15T03:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
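The four dimensions above appear to draw from closed vocabularies. As a minimal sketch of what a typed record and a value check might look like, assuming value sets inferred only from the outputs shown on this page (the class name `CodedComment` and the vocabularies are hypothetical, not from a published codebook):

```python
from dataclasses import dataclass

# Value sets inferred from outputs visible on this page; the real
# codebook may define additional categories.
RESPONSIBILITY = {"ai_itself", "company", "government", "none", "unclear"}
REASONING = {"consequentialist", "deontological", "mixed", "unclear"}
POLICY = {"regulate", "industry_self", "liability", "none", "unclear"}
EMOTION = {"approval", "fear", "resignation", "indifference", "mixed"}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> list[str]:
        """Return the dimension names whose value falls outside the
        inferred vocabulary (an empty list means the record looks clean)."""
        problems = []
        if self.responsibility not in RESPONSIBILITY:
            problems.append("responsibility")
        if self.reasoning not in REASONING:
            problems.append("reasoning")
        if self.policy not in POLICY:
            problems.append("policy")
        if self.emotion not in EMOTION:
            problems.append("emotion")
        return problems
```

Run against the batch below, a check like this would flag the record whose reasoning field reads "resignation", a value that otherwise appears only under emotion.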
Raw LLM Response
[
{"id":"ytc_UgyheXKEPSZAk2djKE94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzGBvnBsRaVxdFqQq54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx_gKdj2gS_TeLADMF4AaABAg","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugx8nV7LZtXPlRSdVCl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz75qKQBi5IWElpF6V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyxYt-Nsex3IIUA3294AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzZow2urTxHWm2Y9a14AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy9BFzdKIomvVIDp0J4AaABAg","responsibility":"none","reasoning":"resignation","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwUSz7uvmjRjkeoWrd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxmknhPv1z36KvuXbd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
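To serve a lookup like the one at the top of this page, the raw model output only needs to be parsed and indexed by comment ID. A minimal sketch, assuming the model reliably returns a bare JSON array of objects with these five keys (real output may need fence-stripping or retries; `index_response` is a hypothetical helper, not part of this tool's code):

```python
import json

def index_response(raw: str) -> dict[str, dict]:
    """Parse a raw batch response (a JSON array of coded comments)
    and index the records by their comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

# Example: pull the coding for the comment inspected above.
raw = """[
  {"id": "ytc_UgwUSz7uvmjRjkeoWrd4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]"""
by_id = index_response(raw)
print(by_id["ytc_UgwUSz7uvmjRjkeoWrd4AaABAg"]["emotion"])  # -> approval
```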