Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Until genuine intelligence becomes a thing, I feel like "This is how the sentenc…" (ytc_Ugwm892hq…)
- "Are you sure about that? If you are referring to LLM models, then they aren’t ev…" (ytc_UgyZIgOMz…)
- "My God just leave these beautiful animals alone already. Nothing and no part of …" (rdc_dv64yqp)
- "If you are talking about monetary benefit then it depends on consumer need and t…" (ytr_UgzOVnq4u…)
- "THE QUESTION EVERY AMERICAN SHOULD BE ASKING IS WHERE IS THE LEADERSHIP? WHERE …" (ytc_UgzbBiAck…)
- "@theguywhoasked2597 exactly, that is exactly how I feel. AI art by itself is laz…" (ytr_UgyF8V3sw…)
- "Google absolutely should've blamed this on the Ai being trained on the popular m…" (ytc_UgzGOUgiZ…)
- "VA senator, Mark Warner needs to be PRIMARIED OUT! He is PUSHING these AI drains…" (ytc_UgwYJ8g4G…)
Comment
What if it already is? Like sure, LLMs are still quite dumb if you know what questions to ask, but it doesn't seem to contradict the idea of conscioussness. For all we know, subjective experience _seems_ to be related to electrical signals in the brain, but you could say that machines are already running on electricity. You could argue that a very complex arrangement of computer chips is not sufficient to create consciousness, but the same could be said for the complex arrangement of cells that make up our brain. In the end, consciousness is such a mysterious notion, so much that we can hardly make any assumptions about what is conscious or what could be. I personally believe we will never have a model for consciousness, because it is defined subjectively and to me this goes against the core idea of scientific reasoning. But if we ever come up with such a model, I would not be surprised to learn that it considers ChatGPT to be conscious. I even read about a neurologist claiming that he would not be surprised to learn that an Iphone is.
youtube · AI Moral Status · 2025-01-29T21:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzpGmlFAuWdTl0aYC54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzLLs5wtRVupSMbRd94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"unclear"},
  {"id":"ytc_Ugz5tnJQ0bVGcVRmbMV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwZfBi0RFg8kNDuN6h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzFGGPi4JpxTj3j40J4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxegWrdlivSZGTeIa54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"unclear"},
  {"id":"ytc_UgyOBUHvoc3qhc4nB2J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw7HSeaQoWUtvDv33x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwzhpClrH2O_TSKdSB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxndD5OHEVECmVXUOJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
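The "look up by comment ID" step can be sketched as follows: parse the raw batch response as JSON and index the rows by their `id` field. This is a minimal illustration, not the tool's actual implementation; `index_codes` and `EXPECTED_DIMENSIONS` are hypothetical names, and the two sample rows are copied from the raw response above.

```python
import json

# Assumed input shape: a JSON array of per-comment codes, as in the
# raw LLM response shown above. Sample rows copied from that output.
raw_response = """
[
  {"id": "ytc_UgzpGmlFAuWdTl0aYC54AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyOBUHvoc3qhc4nB2J4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
"""

# The four dimensions shown in the Coding Result table.
EXPECTED_DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def index_codes(raw: str) -> dict:
    """Parse a batch response and index the rows by comment ID,
    dropping any row that is missing one of the expected dimensions."""
    rows = json.loads(raw)
    return {
        row["id"]: row
        for row in rows
        if EXPECTED_DIMENSIONS <= row.keys()
    }

codes = index_codes(raw_response)
print(codes["ytc_UgyOBUHvoc3qhc4nB2J4AaABAg"]["emotion"])  # fear
```

Indexing by ID also makes it easy to detect rows the model skipped or mangled: any comment ID sent in the batch that is absent from `codes` needs re-coding.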