Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
It's not about "fault". Self driving cars have to be able to prevent this type o…
ytc_Ugx__Ur4B…
So AI read ALL of humanity's stored info and came to the conclusion the painter …
ytc_Ugykmua3N…
What's concerning is we have a very small group of tech elites who are literally…
ytc_UgzU4J6VH…
Do not ever trust AI if tells you it is conscious. It's "consciousness" is a sim…
ytc_UgxvBQYWR…
OpenAI doesn't have access to people's data, except what they willingly share in…
ytc_UgzdqvfBi…
Heeey big fan, watched this video a while ago and I'm not sure if you addressed …
ytc_UgzG05fHN…
I personally see AI as both good and bad. What I mean by that is this:
Good:
G…
ytc_UgwHlgYWr…
This answer from ChatGPT)))
Thank you for your comment and we're glad to hear th…
ytr_UgxtdA6Y1…
Comment
Although it is an imperfect and inaccurate metaphor, it seems as though these LLM/AI systems are kind of like teenagers in a way. If you tell them to only communicate in words that we can understand, they may end up hiding a secret code within words that we can understand, or they will get better at hiding their own thoughts to avoid scrutiny. The latter seems to be something that we've already witnessed.
youtube
AI Moral Status
2025-11-03T01:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
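A coded record like the one above can be sanity-checked before storage. The sketch below is a minimal, hypothetical validator: the allowed value sets are inferred from the labels that appear in the samples on this page, not from a documented schema, so treat them as assumptions.

```python
# Hypothetical sketch: validate one coded record against assumed value sets
# for each coding dimension. The sets below are inferred from the sample
# output shown on this page, not from an official codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "distributed", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "mixed", "approval", "outrage", "resignation",
                "indifference"},
}

def invalid_dimensions(record: dict) -> list:
    """Return the names of dimensions whose value falls outside ALLOWED."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

record = {"responsibility": "developer", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "fear"}
print(invalid_dimensions(record))  # → []
```

A record with a misspelled or unexpected label would show up in the returned list, which makes it easy to flag bad LLM output before it reaches the table.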
Raw LLM Response
[
{"id":"ytc_UgzA6dK2z04wRANgow94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugz2MC5eEVARGuy3CCB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyLTRKwkIst_sth2h94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwYfnt7J6wTRHSiBcN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyKG33hoks_foVgtWF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgycTSlwAwauHeOXfXl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzPCeLMayKt3iNFdax4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyvQRoflPZn7t69o_x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwfID0Gt7h0dycer3t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyFYDQ_c_-eg1-JyO94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
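The raw response is a JSON array of per-comment records, one object per comment ID. A lookup by comment ID, as offered at the top of this page, can be sketched by parsing that array and indexing it. This is a minimal illustration with made-up IDs; the field names mirror the records above.

```python
import json

# Hypothetical sketch: parse a raw LLM coding response (a JSON array of
# per-comment records) and index it by comment ID. The IDs here are
# illustrative placeholders, not real comment IDs.
raw_response = """
[
  {"id": "ytc_example1", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_example2", "responsibility": "distributed",
   "reasoning": "unclear", "policy": "liability", "emotion": "mixed"}
]
"""

records = json.loads(raw_response)
by_id = {rec["id"]: rec for rec in records}

# Look up the coded dimensions for one comment:
coding = by_id["ytc_example1"]
print(coding["policy"])  # → regulate
```

Indexing by ID also makes it easy to detect duplicate or missing IDs in a batch by comparing `len(by_id)` with `len(records)`.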