Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I predict that there's going to be a lot of depressed people in the future consi…" (ytc_Ugyq9a69o…)
- "Desperate for attention it seems. You wouldn't understand why people don't lik…" (ytr_UgykpVq5G…)
- "That quote comes from the World Economic Forum **2020** Report on Future of Jobs…" (rdc_j6fh1ts)
- [translated from French] "Uh, uh, uhhh, you've learned your lesson well, dude, bravo; when you think for yours…" (ytc_UgzKxgJMa…)
- "Sky Net: Self driving cars relying on AI?! Yes, please. ( begins treating cars …" (ytc_UgweX0YKc…)
- "Would it be possible to create an existential crisis in AI? \"AI was created to a…" (ytc_UgzIFA03E…)
- "Just like Moltbook, these agents need their own \"Netflix\" so we can see what mov…" (ytc_UgxULa83F…)
- "Poison AI art: no / Poison AI prompt typers: permanent solution 😊 / Jokes aside fr e…" (ytc_Ugxmukq4h…)
Comment
It’s hard to have a meaningful argument with ChatGPT, because it’s kind of an “if you say so” machine. When you disagree and offer it a counter perspective, you can change the rules by which it is playing. I suspect if you asked whether a purple horse exists, it would say no. But if you insisted you saw one, it would use that evidence to either suggest you must be correct or mentally ill, but either way the purple horse perception must exist “if you say so.” ChatGPT knows that it doesn’t know any more than what we collectively feed it. Basically, “if you disagree with how I function, blame my designers.”
youtube · AI Moral Status · 2024-07-25T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgxPKE1RZDWpQ8k3-K14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzy-NQN4se8RzItHR54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgyYlzXTxUKERD4Cs1l4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyMy5DzDwfZsmy9lxV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzB6RzG1vS9hLZpj_Z4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwa8XwZLBJqF4Otxrh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz6D7gAfUXioktey6x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx5YUB8S4w7Udfcfqp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgxWpKAx1Yyaosffdzt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzt_r0RawW78KfZGeB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```
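The raw response is a JSON array of per-comment codings, one object per comment ID, with the dimensions shown in the coding-result table. A lookup by comment ID, like the one this page offers, can be sketched as follows (a minimal sketch; `lookup_coding` is a hypothetical helper, and the abbreviated sample array just mirrors the schema above):

```python
import json

# Abbreviated raw LLM response: a JSON array of per-comment codings,
# using the same field names as the full response above.
raw_response = """[
  {"id": "ytc_UgxPKE1RZDWpQ8k3-K14AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzt_r0RawW78KfZGeB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "resignation"}
]"""

def lookup_coding(raw: str, comment_id: str):
    """Return the coding dict for a comment ID, or None if it is absent."""
    rows = json.loads(raw)
    return next((row for row in rows if row["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_Ugzt_r0RawW78KfZGeB4AaABAg")
print(coding["emotion"])  # resignation
```

For a large batch of comments, building a `{id: coding}` dict once would be preferable to scanning the array on every lookup.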