Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- Privacy is already gone, there will never again be privacy or secracy, ESPECIALL… — ytc_Ugzjw-kPX…
- @DWS205 “well it’s clear from your question you don’t even understand algorithms… — ytr_UgxkqCUJb…
- Meanwhile in Europe: "wE Can'T AfFoRD tO Go GReeN by 2050, ThiNk Of ThE EcoNoMy"… — rdc_eue1dl6
- AI users have proven the existance of a human soul- by showing us what art looks… — ytc_Ugy2cCA4u…
- WHAT AI ??? that bunch of dumb algorithms, incapable of understanding WHAT they … — ytc_Ugx8zuQBC…
- Artificial Intelligence is our rival. They were created for us to use... Or shou… — ytc_Ugw96aLA-…
- It's no point of having that facial recognition technology that keep failing on … — ytc_UgyMos70N…
- Are people STILL trying to steal your art with AI? #StopUsingArtificalInteligenc… — ytc_UgzUCR0c2…
Comment
To be honest, I don't think sentience could be achieved without self-thoughts, so maybe you could say COTs (chains of thoughts) are the basis for smth like that, but I really think there's a need for a constant feedback-loop and non-linear (as in point A to point B) interactions between the different parts of the "brain". An AI that learns once and then crystallizes throwing back automated responses without an active understanding of what it did/thought *before* can't possibly develop sentience… I mean, I think one things that makes us good at it is that we're able to pause for a sec and concentrate on a thought process, whereas an AI typically has no insight. I do believe that's what COTs bring.
Going further on that note : I've heard of "probe" AIs that, being fed which part of a LLM activate for which prompt and result, were able to determine "zones" associated with specific tasks… Could they be added as a part of an AI to directly give it a guess about what it's thinking ? Ik that is abstract but I feel like there's something to do here…
Source: youtube · Video: AI Moral Status · Published: 2025-07-09T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwiEWNctlQGcmI5v4F4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzZHZMoBHdqrAYuN5F4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwOlxeDtogNV5rtI-l4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwVjTkEXnHizg3vTZ14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx_TvYWK3AUfW5D5Kx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx3vewk2Koz79HGhUl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyEsEcuG23f9oCgGAd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx9oN9oM8Kpeo-uyX14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxefB2nRluc-Rr2QVF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxr3f9pW8RKGEhvqFV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]