Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- “It’s our use of AI wouldn’t be able to be turned off or pulled back. That’s the …” (`ytc_UgwdcIRfA…`)
- “UBI is a way to protect against corporate capitalism's inevitable wage suppressi…” (`ytc_UgzcyLTN2…`)
- “Not going to lie, if I lost my IT job and started working in something not it re…” (`rdc_nbn1qxn`)
- “plan written by his AI :) the only one that can save humans and remind them of…” (`ytc_UgyRINgs_…`)
- “This is ridiculous. The whole Terminator franchise was a study on jailbreak of…” (`ytc_UgzByY8yC…`)
- “My "art " on tere good to know its used to bring ai down by infecting it…” (`ytc_Ugxh5OR90…`)
- “Nobody would get on a plane if they were told it had a 10% chance of crashing. T…” (`ytr_Ugw4Bidiv…`)
- “Is weird the fact that the face recognition tool tends to mismatch with color pe…” (`ytc_UgxN_LBL8…`)
Comment
I find it interesting that the theory of singularity is treated as an inevitable fact. It may not be.
The amount of energy required to run LLM AI today is enormous and much more will be needed for a general AI. Will the amount of power needed be able to be generated? Will there be a financial return on investment?
It is assumed as fact that intelligence can grow exponentially when we have no proof one way or another. If it can, wouldn’t you need an exponential increase in corresponding energy generation and computing resources to support it?
It also implies that there isn’t a limit to intelligence. There may be such a limit, a constraint of physical laws such as data or energy transfer. Nature does it organically after millions of years of evolutionary optimization. We have no guarantee that we can replicate and exceed this result.
Any one of these factors would prevent a singularity event.
The AI bubble will eventually burst.
Source: youtube | Video: AI Moral Status | Posted: 2025-07-06T08:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgySqv4ftpCRdvpQ_L14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwIWsHI6ARkvhdMqqN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwXubWUW-LwNbn8Hgt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_Ugw0dloPErJxm-odayJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxHHXRUt5V63NpCfIF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzXRuWiJE0yUNdK3Od4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxLbAXrxfVPmRA3YoR4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxXsxbaPCKcB63q5qZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz8UNAAWABCIXxALxB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzuO8_rT-LqjO_8ZaB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
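The coding-result table for a comment can be cross-checked against the raw batch response programmatically. Below is a minimal sketch: the JSON payload is excerpted from the raw response shown above, but the `code_for` helper is hypothetical (not part of the tool), and it simply indexes the batch by comment ID.

```python
import json

# Excerpt of the raw LLM response shown above (two of the ten entries).
raw_response = """
[
  {"id": "ytc_UgwIWsHI6ARkvhdMqqN4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxHHXRUt5V63NpCfIF4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
"""

def code_for(comment_id: str, response_text: str) -> dict:
    """Hypothetical lookup: parse the batch JSON and return one comment's coding."""
    by_id = {entry["id"]: entry for entry in json.loads(response_text)}
    return by_id[comment_id]

coding = code_for("ytc_UgwIWsHI6ARkvhdMqqN4AaABAg", raw_response)
print(coding["responsibility"], coding["policy"])  # distributed regulate
```

The dictionary built from `id` keys is what makes the dashboard's "look up by comment ID" view cheap: one parse of the batch, then constant-time lookups.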