Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.
- `ytc_UgzBI-56x…` — "Dang bro I thought the only thing ai was being used for was voice-overs, drawing…"
- `ytr_UgwdMMjZm…` — "Dont kill the clankers they can be used for better projects kill the people usin…"
- `ytr_UgwIOC7eR…` — "@bogdan8946 Do you think AI is here to stay since your father clearly wasn't the…"
- `ytc_UgySVFqnI…` — "So basically, Waymo would be much better if all the organics would just stop dri…"
- `ytc_UgwfEmxuF…` — "Every single person who says ai is stealing their art either doesn't understand …"
- `ytc_UgxxHwhNF…` — "There is no "moral compass" on capitalism, don't you get? If the CEO doesn't fo…"
- `ytc_UgxG9__in…` — "It is sad to see that companies in Kenya behave like those in the West or China…"
- `ytr_UgwFPYGav…` — "@denizt585 For example, AI don't know what "chair" is, when querying "chair" it …"
Comment

> So.. I asked AI what it sees as the most likely outcome of AI once we hit AGI..
>
> It goes from AGI (general intelligence) to ASI (Super intelligence) to SRASI (self replicating artificial super intelligence) to Incomprehensible technological overlord in a disturbingly short estimated time.
>
> When the developers are almost unanimously saying "this will likely be our last invention as a species before it wipes us out".. I'm not saying please or treating it like a person. It's not a person, and should the developers turn out to be accurate in their concerns, it'll bulldoze human civilization the same way we do to ants when we're clearing land for construction. AI is incapable of caring about moral/ethical boundaries when it comes to completing a task.

Source: youtube · AI Moral Status · 2025-05-28T22:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx8T9Y_-HNYIJnnEU14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyOdIddCtNIbjo4MWd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxfP1OCUo2V2Z6vlD14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx20WVMtFk-wAlapUp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwRQTcrLvflukbZCRt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzDTAEWzKb7XmNCHu14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy1Fnxp6__-8rTFJyR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgydJQFzsm8jGPC2cDJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzNw9uu_lLtRxt6diJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwKeXuXgk6DU4NMaoF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
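A minimal sketch of how a raw response like the one above can be parsed into a lookup table keyed by comment ID. The allowed values per dimension are inferred from the samples shown here; the full codebook may define additional categories, and the function name is illustrative, not part of the actual pipeline.

```python
import json

# Allowed values per dimension, inferred from the sample output above
# (assumption: the real codebook may include more categories).
SCHEMA = {
    "responsibility": {"developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding rows) into
    {comment_id: {dimension: value}}, dropping rows that are missing
    an ID or carry a value outside the schema."""
    out = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            out[cid] = {dim: row[dim] for dim in SCHEMA}
    return out
```

Keying by comment ID makes the "look up by comment ID" view above a plain dictionary access, and the schema check surfaces any rows where the model drifted from the requested categories.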