Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coding directly by comment ID, or inspect one of the random samples below.
| Comment (truncated) | ID |
|---|---|
| ChatGPT put it like this: "If an entity cannot assert “I am conscious / sentient… | ytc_UgxH1McRi… |
| Why is it stuttering????? If that doesn't make you chill than you're the AI, lol… | ytc_UgzSF0CAg… |
| @nameless-yd6koaww qt try harder just search about yourself why you being de… | ytr_Ugy9GewVa… |
| It's insane that OpenAI is able to hide behind no accountability. I have saved l… | ytc_UgyV0pW-9… |
| You clearly have no idea how ai actually works…. People already twist and manip… | rdc_m39u7rk |
| Phones are worse than ring cameras. Facial recognition and finger print scanning… | ytc_UgwzWCtsk… |
| Wt people gonna doo in future, i think banning ai is the right decision, fuck Ai… | ytc_UgwVmTPyA… |
| I tried reasoning with an “ai supporter” and I told them it was like tracing and… | ytc_UgzChLXQx… |
Comment
Isn't language inherently built upon consciousness, so a chat bot that uses the language, trained upon peoples use of the language, would use phrases such as "im excited" or "im sorry"? What else would it say? Any I am statements wouldn't work. how would you chat with a chatbot that can never refer to themselves, especially in an interview? It's a chatbot trained upon human interactions and algorithms. The whole point is that it talks human like?
Source: youtube · Video: AI Moral Status · Posted: 2024-12-10T21:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_Ugwk1N3LKcsDnio_NsR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxOYaDmWR4MVI0nJI54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwTBD59BSSyaTnmEcx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwz4vhHnVjgrRVInZ94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxBufZ7Sxmd_vIQOvR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwH4jC4Ra9WFnlVMql4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxBgz0ihqeuCUK_r3Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzBxI_59BjfwwnhJ4p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy58OfI11YuiC2KQ_d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgySeNEnL1_93jYbOph4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}]
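The lookup-by-comment-ID feature above amounts to indexing this JSON array by its `id` field. A minimal sketch, assuming the raw LLM response is a JSON array of objects with the `id`, `responsibility`, `reasoning`, `policy`, and `emotion` keys shown (the `lookup_coding` helper name and the two-entry sample are illustrative, not part of the tool):

```python
import json

# Two coding objects copied from the raw LLM response above, trimmed
# for brevity; the real response is an array with one object per comment.
raw_response = """[
 {"id": "ytc_UgwTBD59BSSyaTnmEcx4AaABAg", "responsibility": "unclear",
  "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
 {"id": "ytc_UgzBxI_59BjfwwnhJ4p4AaABAg", "responsibility": "developer",
  "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]"""

def lookup_coding(raw, comment_id):
    """Return the coding dict for a comment ID, or None if it is absent."""
    by_id = {item["id"]: item for item in json.loads(raw)}
    return by_id.get(comment_id)

coding = lookup_coding(raw_response, "ytc_UgwTBD59BSSyaTnmEcx4AaABAg")
print(coding["emotion"])  # → mixed
```

The four dimensions of the returned dict correspond to the rows of the Coding Result table; an unknown ID simply returns `None` rather than raising.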