Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I got yelled at a few months ago on a webnovel discord for getting upset with an…" (ytc_UgzrR4g-1…)
- "Yep. ChatGPT is designed to lie. I once questioned it avout a fib so much it eve…" (ytc_UgxvHPs5W…)
- "I wish that ChatGPT would respond with like: "Dude, get over it i'm just program…" (ytc_UgwizGq6x…)
- "ART, MAKING MOVIES or VIDEOS , IF OU ARE USNG AI THEN IT IS NOT AN ORIGNAL…" (ytc_Ugw_2rlCH…)
- "Love how they said your art isn't safe from them. Like, just admit you're a thie…" (ytc_UgxzfRhUN…)
- "The difference between googling something and asking a chatbot is that, when goo…" (ytc_Ugz_PJSxm…)
- "Ive tried ChatGPT and the conversation interaction isnt anything like this, when…" (ytc_UgxB44yr2…)
- "That's the problem. With AI, literally nobody is safe from it. Turns out, nude b…" (ytc_UgwZ9UEQO…)
Comment
"Why are you so insightful? Like seriously, I am concerned about how you could be so much more insightful than I? Am I not as conscious as you?"
If we go by the assumption that consciousness can be measured (if somebody knows more about the world and themselves than another then that would make them more self-aware), then wouldn't an AI that is considerably more intelligent and knowledgeable than us therefore be also conscious or even more so? What if being self-aware or conscious is not something you are born with but something you develop through memory and experience?
Source: youtube | Video: AI Moral Status | Published: 2023-08-21T01:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgySJMKX5-RFp4AHo3d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz3xh9Im_DuJ2JWKaZ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwq5VjXGVui02DZ0Xt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyEB8PlTgA71QrmtrB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy9TPElwM-aF2yKHzt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz1L_UZf2rA0fw1KD94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwH7GlJTUoZky2lFtZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy79STfvW7RuXXlyBJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugym0ex0EnoWT3XHzn14AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyxGRsDkVxgz2UFTX94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
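Turning a raw batch response like the one above into a per-comment coding result amounts to parsing the JSON array and indexing it by comment ID. The sketch below illustrates that step; `index_by_id` is a hypothetical helper, not part of the actual pipeline, and the two entries are copied from the response above.

```python
import json

# Two entries copied from the raw LLM response shown above.
raw_response = """
[
  {"id": "ytc_UgySJMKX5-RFp4AHo3d4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz3xh9Im_DuJ2JWKaZ4AaABAg", "responsibility": "unclear",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse the model output and index each coding record by its comment ID.

    Hypothetical helper for illustration only.
    """
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codings = index_by_id(raw_response)
coding = codings["ytc_Ugz3xh9Im_DuJ2JWKaZ4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # unclear mixed
```

With the index in hand, any coded comment's dimension values (responsibility, reasoning, policy, emotion) can be retrieved in constant time by its `ytc_…` ID.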