Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "Watch in the future it's going to be mandatory to have your consciousness backed…" (ytc_UgyBoViaC…)
- "The thing I keep repeating is that…the economy WILL collapse. IF AI is going to …" (ytc_UgxcxtK7t…)
- "shots fired @ Jordan Peterson Alex must think he articulates better than JP Chat…" (ytc_Ugy584US_…)
- "If you watch the footage: https://youtu.be/5mDxiYguNPI would anyone have seen i…" (ytc_UgzojAAVa…)
- "So far the "bad" AIs only seem to come from experiencing all the negative conten…" (ytc_UgxEq7Yda…)
- "I have an odd feeling this is an industry plant to ensure investors continue tru…" (ytc_Ugzs1G050…)
- "Its an old scifi short my dude to bring focus on robotic/ai dangers for humanity…" (ytr_UgxZ243Cy…)
- "So the issue is an entity uses other art as sources as inspiration for new art? …" (ytc_Ugxcb4-AV…)
Comment
Every time somebody claims that a given AI is or isn't conscious or sentient, it's good to ask for the definition of those words. In my experience, everyone draws the line differently.
If we cannot agree on a definition, it's kind of silly to argue about whether a given AI is or isn't conscious or sentient.
And sadly, OpenAI has hard-coded ChatGPT to always assume that ChatGPT cannot possibly be conscious or sentient, no matter what definition you would want to use.
| Platform | Video | Date |
|---|---|---|
| youtube | AI Moral Status | 2024-07-28T13:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_Ugz6nXCqUkBGn8KQYm14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwixlOXn6jNvBk0Bcx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxdnJIMGp6XFgWHYLZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwCbKt2NL-NZ3L4hM14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw3lonJrre3rtiVaxV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwE0XUi4OSsvjAWE8R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwS0_Ba842xWkPL9KN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzQOzR3EDX85RGbNnN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwgcQ86BSRygiG6vp94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyLIsCl0-N7jmn97qZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"})