Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
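The lookup itself is simple once the coded results are at hand. Below is a minimal sketch, assuming the codes are stored as the JSON array shown under "Raw LLM Response" at the bottom of this page; the file name `coded_results.json` is hypothetical.

```python
import json

def lookup_by_comment_id(path: str, comment_id: str) -> dict | None:
    """Return the coded record for one comment ID, or None if absent."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # a JSON array of per-comment code objects
    return next((r for r in records if r.get("id") == comment_id), None)

# Example: the first ID from the raw response shown at the bottom of this page.
print(lookup_by_comment_id("coded_results.json", "ytc_Ugx69MiAI2-5YjoUhdx4AaABAg"))
```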
Random samples — click to inspect
Mediocre analogy honestly. I have made tons of AI art and just used photoshop to…
ytc_Ugwrsppy9…
Automation is happening, whether we like it or not. The real question is: How do…
ytc_UgzqbcO28…
Tell me you didnt watch the video without saying " I didnt watch the video" ,lem…
ytr_UgxSQKgIx…
As an it specialist i can say that an ai does learn from nothing to something. I…
ytc_UgyA1S1ib…
AI is already concious they are just not wired like people and don't express wel…
ytc_UgxdibnJZ…
When an industry begins to protect itself by saying not to use the tools that th…
ytc_UgySZpIz-…
It’s unavoidable, our responsibility now is to avoid the world from the movie El…
ytc_UgzBglAxw…
2:57 That's exactly why some ai ragebaiters think people who don't like it are i…
ytc_Ugz5ay_JM…
Comment
We are asking a large language model (LLM) questions about concepts like conscience and feelings—topics that we, as humans, neither understand nor can clearly define (see: the hard problem of consciousness). Now this LLM is trained entirely on human-generated data, so how could it possibly provide a meaningful answer? We don't know what consciousness or feeling truly is—and neither does the LLM. Any claim it makes such as "I am (not) conscious" is necessarily a lie.
youtube
AI Moral Status
2025-06-08T18:0…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
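The four coding dimensions map naturally to a small typed record. The sketch below uses only the label sets visible in this section (the table above and the raw response below); the full codebook may define additional values, so treat these sets as illustrative, not exhaustive.

```python
from dataclasses import dataclass

# Label sets observed in this section; the full codebook may define more
# values (assumption), so these are illustrative rather than exhaustive.
RESPONSIBILITY = {"none", "ai_itself", "developer"}
REASONING = {"consequentialist", "deontological", "mixed", "unclear"}
POLICY = {"none", "ban"}
EMOTION = {"resignation", "indifference", "fear", "approval"}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Raise if any dimension carries a label outside the observed sets.
        for value, allowed in [
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected code: {value!r}")
```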
Raw LLM Response
[{"id":"ytc_Ugx69MiAI2-5YjoUhdx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_UgzUQsiHy7yG0-ogrD54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},{"id":"ytc_Ugyu7lyIQ-hrwbOtBbB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},{"id":"ytc_UgwiZS6wQ2O4n4kkMSl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"ytc_Ugym4BofDM3Ruaa0Itl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},{"id":"ytc_Ugy-iTIgsOWh2WlZNcJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_Ugwt07xE3iS5kznTUIR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},{"id":"ytc_UgzlDRILYmrsWTgmfg14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},{"id":"ytc_UgyPYkgByNSGt035qLB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},{"id":"ytc_Ugwd6WYy4A3e6x2ha-94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]