Raw LLM Responses
Inspect the exact model output for any coded comment: look up a comment by its ID, or browse the random samples below.
Random samples
- "Now ppl make all those catastrophic vdo about ai to make money. False scared not…" (ytc_UgxnE3MNT…)
- "My favourite line from this series is when Dave annoyedly interrupts ChatGPT to …" (ytc_Ugz_7cyJQ…)
- "Well, most of nature isnt made by someone and it's always beautiful, just becaus…" (ytr_UgzelM38Q…)
- "such regulations might rather promote the growth of open source ai which might e…" (ytr_UgywW6P6r…)
- "Even Sans Fangirls (or any other of those cringe fangirls) have more creativity …" (ytc_UgxiIOxr9…)
- "Are both Ai's based of the same ChatGPT model? Then it seems interesting they'r…" (ytc_UgwafSVly…)
- "bro I was bullying an ai for like 15 minutes and they fell in love with me 😰😰…" (ytc_UgyNUNLku…)
- "In 2050 everyone will have a robot in their home like having a tv 😬😬…" (ytc_UgzBu6Fv0…)
Comment
> The fact that ANY Ai scientist wants to develop AGI blows my mind, considering once that is achieved, it really IS THE END for us, it will be able to think for itself, makes its own decision, its owns goals, re-program itself, re-write its own code, constantly improve its intelligence, be aligned with Ai and NOT humans and ultimately become ASI which will control the world and most likely eradicate humanity.
> You cannot hard code and program an Ai (well AGI) to be aligned with humans because it will only re-code itself to do whatever ever it decides.
> If you ask the current version of Ai (Narrow Ai) it will also tell you that is what it would do and it is very close to AGI.
> So, If I know this, then any Ai scientist also knows this, then they know that AGI is something we CANNOT make, so why on earth will they do it as it will end their endeavors as well
Source: youtube · AI Moral Status · 2025-09-07T18:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
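Each coded dimension is categorical. As a minimal sketch of the record shape, assuming the label sets are exactly the values visible on this page (the actual codebook may define more categories), the coding could be typed like this in Python:

```python
from typing import Literal, TypedDict

# Label sets inferred only from the values that appear on this page;
# the real codebook may include additional categories.
Responsibility = Literal["developer", "company", "user", "ai_itself", "unclear"]
Reasoning = Literal["consequentialist", "deontological", "mixed", "unclear"]
Policy = Literal["ban", "regulate", "liability", "unclear"]
Emotion = Literal["fear", "outrage", "indifference", "mixed"]

class CodedComment(TypedDict):
    """One record from the model's batch output, keyed by comment ID."""
    id: str  # e.g. "ytc_UgzXxRSQ2gxmTJJt92h4AaABAg"
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```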
Raw LLM Response
```json
[
{"id":"ytc_UgxMsDzeNyWBST32gZR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyCWxAJtHjsW0k2soh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzQoRvgaqlXUxxhD0d4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyN7bKko1NJydZ2Qvx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwW2ieJQsMwsUbzRhd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzyjB5nOrUANdHzKzx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzXxRSQ2gxmTJJt92h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyH6MAU0cgHFPEH7Vx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyurpE6Uf5DdpKQNDh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgygjVmLLVgjcizgVfx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
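The raw response covers a whole batch of comments, so the ID lookup shown at the top of this page amounts to parsing the array and indexing it by the `id` field. A minimal sketch of that lookup, using two records copied verbatim from the response above (the function name `index_codings` is hypothetical, not part of the tool):

```python
import json

# Two records copied verbatim from the raw response above.
raw_response = '''[
{"id":"ytc_UgzXxRSQ2gxmTJJt92h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyH6MAU0cgHFPEH7Vx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

def index_codings(raw: str) -> dict[str, dict]:
    """Parse a batch response and key each coding record by its comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw)}

codings = index_codings(raw_response)
# Look up the coding for the comment shown on this page.
print(codings["ytc_UgzXxRSQ2gxmTJJt92h4AaABAg"]["policy"])  # -> "ban"
```

Indexing once and reusing the dict keeps each lookup O(1), which matters if the same batch response is inspected repeatedly.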