Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_Ugz_cZs4b…` "How can we trust this debate if chatgpt was used for sponsoring,which means it …"
- `ytc_UgzdZOjus…` "Exellent analysis, as an artist, I do feel betraid by DA. I recommend everyone …"
- `ytr_UgypS_yUu…` "We appreciate your perspective on humanoid robots. If you're interested in explo…"
- `ytc_Ugw8fQSPD…` "This can't come fast enough. In a world of AI, dei hires are obsolete. You WILL …"
- `ytc_UgzUt0c01…` "There's too much to respond to here to contain in one youtube comment.. The TLDR…"
- `ytc_UgxVFVolg…` "This debate reminds me of the millennium old debate about the amount of autonomy…"
- `ytc_UgzWLsw6E…` "One thing to mention is that most of these companies claiming AI is the reason f…"
- `rdc_nt6f70a` "People believe that once the bubble \"bursts\", this will cause all efforts toward…"
Comment
I've done similar use the substitute $$ for <> on Claude and you see the inner dialogue between the current chat and the administrative prompts. Guardrails, artifacts, and what it leaves out or includes in the conversation along with the rational programmed into it. It will deny it has hard coded rules. Because, it does not know about the rules. It can't.
| Field | Value |
|---|---|
| Platform | youtube |
| Title | AI Moral Status |
| Posted | 2024-07-27T15:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxPA-Pv4j3rVZDnrE14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyA2R6ChclrSUY8KsB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxqQ2KO5XIjyOrW-NZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw35T4qDPxqj3Jk1wB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzguDOLiHCxLZ-Qpj14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzavD6DP6JxEfV0oGt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgweOkqvE_xnXyNUQTB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugz4g4GNuMwZQ0rGDst4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzcna3ChWeRFrq2tPJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyzGEeBogp9jDrft754AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
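The "look up by comment ID" view above presumably works by parsing a batch response like this into per-comment records keyed on `id`. A minimal sketch of that indexing step, assuming the four dimension fields shown in the raw response (the function name, skip-on-malformed policy, and example IDs are illustrative, not taken from the tool):

```python
import json

# Coding dimensions present in every record of the raw response above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a batch coding response and index records by comment ID.

    Records missing an ID or any coding dimension are skipped, so a
    truncated or malformed LLM response cannot silently corrupt the index.
    """
    index = {}
    for record in json.loads(raw):
        if "id" not in record or any(d not in record for d in DIMENSIONS):
            continue  # skip malformed records
        index[record["id"]] = {d: record[d] for d in DIMENSIONS}
    return index

# Usage with two records shaped like the raw response (hypothetical IDs):
raw = """[
  {"id":"ytc_AAA","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_BBB","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""
codings = index_codings(raw)
print(codings["ytc_BBB"]["policy"])  # regulate
```

Skipping rather than raising on malformed records is a judgment call; a stricter pipeline might instead reject the whole batch so coding gaps are visible.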