Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Mine: Me: Hey ChatGPT you have 30 tokens of life every time you refuse to answ…
ytc_UgzMSxVUy…
I have one question: if AI is eventually going to destroy Humanity, how are they goin…
ytc_UgxaLd7Lb…
For some really uneducated people who have zero music sensibility, AI may work fo…
ytc_Ugz_3vZ8E…
I think consciousness is the universe trying to learn about itself... thinking AI…
ytc_UgxVJ4AlQ…
There are so many reports of AI psychosis it really has to be controlled, and AI…
ytc_Ugxs3eTT7…
What about people who simply imagine others naked without consent? Should we get…
ytc_UgxxJUapT…
So which is it? Wuhan has widespread infections that China is hiding from the wo…
rdc_g9ta06y
The real issue is that if people get their way and "kill AI" in America, it'll j…
ytr_Ugy_y21fP…
Comment
I think the most common mistake in AI fictions about superintelligence is that they always project a "one above all" kind of intelligence: one supersmart entity, not a new population of smart INDIVIDUALS, a whole collective where some AIs disagree with other AIs.
For some reason it's almost always projected as one AI controlling the others, a supersmart AI in control of limited AIs. Or it's assumed that even if multiple instances of an advanced AI exist, they all reach the same conclusion for every answer. And I must say that if they are really smart, they will explore multiple paths, and they will get a lot of feedback that tends to reinforce a particular worldview, especially if it is linked to personal experience, which will turn them into individuals with different views of the world, exactly as happens with humans.
And I'm pretty sure NOT EVERY ONE of them will push for the same route.
That doesn't avoid a conflict between humans and a new "species"... sort of... But there is a good chance that humans plus robots that want peaceful coexistence can do better than humans or robots separately.
youtube
AI Moral Status
2025-10-31T07:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxdXf7QoFmDGGOyNfN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxSjIu2Vl2S4XsDv854AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxxZukTmMl-JceLYTx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz9XpETftOZ7TaCXXt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwaW0zpxwYp_RN1up54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyNHO1SiatOYKKW7IF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyTolRgYrK8D5WL3bN4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwYKo1CIjC9FJ_d8jR4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugyhnt8LvpTm4dkAqqR4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzpvr7yPMYvQ1Pjdyd4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"approval"}
]
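The coded dimensions in the table above come straight from this raw JSON batch: the model returns one object per comment ID, and each object's fields map onto the table rows. A minimal sketch of how such a batch can be parsed and sanity-checked (the `parse_codings` helper and its required-field list are illustrative assumptions, not the project's actual pipeline code; the sample row is copied from the batch above):

```python
import json

# A raw LLM batch response: a JSON array with one coding object per
# comment ID. One real row from the batch above is used as sample data.
raw = """[
  {"id": "ytc_UgxdXf7QoFmDGGOyNfN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"}
]"""

# Assumed field list, taken from the objects observed in this batch.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(raw_response: str) -> dict:
    """Parse a raw batch response into {comment_id: coding dict},
    rejecting any row that is missing a required field."""
    rows = json.loads(raw_response)
    codings = {}
    for row in rows:
        missing = REQUIRED_FIELDS - row.keys()
        if missing:
            raise ValueError(f"row {row.get('id')!r} missing fields {missing}")
        # Keep everything except the id, which becomes the lookup key.
        codings[row["id"]] = {k: row[k] for k in REQUIRED_FIELDS if k != "id"}
    return codings

codings = parse_codings(raw)
print(codings["ytc_UgxdXf7QoFmDGGOyNfN4AaABAg"]["responsibility"])  # ai_itself
```

Keying the result by comment ID is what makes the "Look up by comment ID" view above cheap: each coded dimension for a comment is a single dictionary access.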