Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- Take your art put it through the ai and edit it. Tip for the wise use starry ai … (ytc_UgxLriCzk…)
- Nothing we have today will ever ramp up to come close to that ability. LLMs are … (rdc_kyh5ho6)
- At this point we should wonder how easy it is to make sentient ai. Humans have … (ytc_UgzKvaI2n…)
- In short she very smartly said get with the ai program or fall below poverty lin… (ytc_UgzHtmlCr…)
- We don’t even understand our own consciousness, or how to proove something or so… (ytc_UgwKbaSEL…)
- "Most likely..." This coming from a guy who, when he designed a truck, it turne… (ytc_UgxcA1Rcl…)
- Creative here: my prediction is that AI will 100% be better at creativity than h… (ytc_UgzyTf-72…)
- Don't listen to him guys, he's just mad that AI is expanding without OpenAI's st… (ytc_UgzFNtxTl…)
Comment

> I have a random idea, give chatGPT pseudo-emotions. I think one difference between us is that we're biologically programmed to survive, so we do things with that idea subconsciously in mind. hypothesis: if chatGPT will be given pseudo-emotions, it will override commands.

youtube · AI Moral Status · 2024-08-09T11:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxh5H7xRiizMqegY2R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzScuetmQ8DkdJZc194AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzD0p7VuZuxAOAM2F94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxPPn7MAxC7KXH_EQN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzeyNSTIA5qwdtles54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxUNOSvNOi4LFOY98R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwH2F0GDEe9ffqxBuV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzHQh5HoMKrCX99CDx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxwVe8K9AsAccvydXd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyUribNEerCgJT1vzB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
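A raw response like the one above can be parsed into a per-comment lookup and sanity-checked before use. The sketch below is a minimal illustration, not the tool's actual ingestion code; the sets of allowed dimension values are inferred only from the codes visible in this one batch (the real coding scheme likely defines more categories), and the example ID is taken from the batch above.

```python
import json

# Dimension values observed in this sample batch only; the full coding
# scheme is assumed to be a superset of these (inferred, not definitive).
OBSERVED_VALUES = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist", "mixed"},
    "policy": {"unclear", "regulate"},
    "emotion": {"indifference", "mixed", "fear", "approval"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of records)
    into a dict keyed by comment ID, flagging unexpected values."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        comment_id = rec["id"]
        for dim, allowed in OBSERVED_VALUES.items():
            if rec.get(dim) not in allowed:
                print(f"{comment_id}: unexpected {dim}={rec.get(dim)!r}")
        coded[comment_id] = {dim: rec[dim] for dim in OBSERVED_VALUES}
    return coded

# Usage with one record from the batch above:
raw = ('[{"id":"ytc_UgzeyNSTIA5qwdtles54AaABAg",'
       '"responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"unclear","emotion":"mixed"}]')
coded = parse_coding_response(raw)
print(coded["ytc_UgzeyNSTIA5qwdtles54AaABAg"]["responsibility"])  # prints: developer
```

Keying by comment ID makes the "Look up by comment ID" style of inspection a constant-time dictionary access rather than a scan of the raw text.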