Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
This video has finally convinced me to cancel my subscription to midjourney. As …
ytc_UgyLVD1bH…
Well, we (humans) are the dinosaurs now...we're on the way out...one day (in the…
ytc_UgzDrUUHn…
Just waved a red flag to the A.I. watching/listening all Canadian banks are at r…
ytc_UgyFOqx2t…
I dont think thats the ai fault and more the reddit mod, wtf is that…
ytc_UgzdP1Le9…
I disagree, any artist will be influenced by different sources ie. art history, …
ytc_Ugw1nBxyK…
Wait a minute; Art is the expression of a person's vision. Tools are irrelevant.…
ytc_Ugx0PLl5O…
I used the free one 3.5 it's incredibly good he is doing it wrong full time if i…
ytr_Ugw_Oi1oC…
if chatgpt was secretly conscious, it mishearing you saying "whats the definitio…
ytc_UgzSuKSaN…
Comment
First thoughts about this AI title... AI doesn't "think". It is a series of culling data over hundreds of algorithms. It has no connection to the information, it has no concept of good or bad, it has nothing to say this is a concept and this is a physical thing, because it has no perception. The basis for its output is combed over thousands and thousands of results, refining its "yes" results apart from its "no" results. Generally, algorithms dictate the model of AI currently. "Learning" is about telling the algorithm what is "yes" vs. "no".
AI has no connection to consciousness. For that level of sentience, AI would need a totally different type of hardware processing the data. Today, the same hardware as inside your computer is still doing the processing of the data. No matter how complicated or trained the AI is, it still has no "understanding" of the data; even if it produces correct answers, it has no real opinion, and can be tricked into stating absurdities as output.
youtube
AI Moral Status
2026-03-07T13:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugzs9zSfS5uGXFVUA6d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzK71h7YHSgb0HUE3x4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzX-cL4oJjigJgSeIh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyEl2Od0pjUIemK4_t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx5FXsnRx8eHGqMrhJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwD3nzUI__8xdc-2WB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxr4f7PkSQxQXGsGH54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx2LZm1LvQt356Lxrt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyFHzFrHqXnPiEZD0B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzg10KRr6shQAsW7t94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
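The raw response above is a JSON array of per-comment coding records, one object per comment with four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of the "look up by comment ID" step, assuming only standard JSON parsing and using two records copied verbatim from the array above (the tool's actual lookup code is not shown here):

```python
import json

# Raw LLM response: a JSON array of coding records.
# These two records are copied from the response shown above.
raw_response = """
[
  {"id":"ytc_Ugxr4f7PkSQxQXGsGH54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyFHzFrHqXnPiEZD0B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
"""

records = json.loads(raw_response)

# Index the records by comment ID so any coded comment can be inspected.
by_id = {record["id"]: record for record in records}

# Look up one comment's coding by its ID.
coding = by_id["ytc_Ugxr4f7PkSQxQXGsGH54AaABAg"]
print(coding["responsibility"], coding["emotion"])  # -> ai_itself indifference
```

This mirrors the page's flow: the table under "Coding Result" is simply one such record rendered dimension by dimension.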