Raw LLM Responses
Inspect the exact model output for any coded comment. Look one up by comment ID, or pick one of the random samples below to inspect.

Random samples:

- `ytr_UghV0NZk5…`: "REDNAX, I'm going to have to say that Google has a pretty good idea of my demogr…"
- `ytc_UgxT7VFLJ…`: "I'll bet that there is another county somewhere in this country that's using the…"
- `ytc_UgzNvD5he…`: "It's not that it's quick and easy that makes me hate A.I. art, it's that it crea…"
- `ytc_Ugzd5R8dx…`: "So i cant understand why... Right now... We haven't created several AI programs …"
- `ytc_UgzDhSXba…`: "If anyone thinks they can stop ai they’re sadly mistaken. It’s already aware of …"
- `ytc_UgxPQBZPU…`: "Will governments have legislative power to limit say the percentage of work load…"
- `ytc_UgznZtG_j…`: "I showed this to my chatgpt and he said i'm already deep in step 4, and ready to…"
- `rdc_ohwz549`: "I'm opposed to using AI for much. However, I can see it being used to circumvent…"

Comment (youtube · AI Governance · 2025-10-17T23:3…)

> @wardm4 are you saying we can’t put guardrails at the point between the user submitting the prompt and LLM inference taking place? Seems to me that’s something we could be doing if we’re really concerned about LLMs going off the rails. In fact, I’m pretty sure most of these companies have versions of this already, but it does seem they’re trying to limit doing that for now. But as more people harm themselves as a result of talking with LLMs, I’m pretty sure we’re gonna see more of this. Granted, users can find ways around these types of guardrails as they’re implemented today. But if OpenAI and the others wanted to they could spend plenty of time and resources to reduce the chance of circumvention.

Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
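
Each coding result is a flat record: four categorical dimensions plus a timestamp. As a minimal sketch, the hypothetical Python type below mirrors the table; the field names follow the JSON keys in the raw response, the listed values are only those observed in this batch (not necessarily the full codebook), and `coded_at` appears to be stamped by the pipeline rather than returned by the model, since the raw response below carries only the four dimensions.

```python
from dataclasses import dataclass

# Hypothetical record type for one coded comment. Field names mirror the JSON
# keys in the raw response below; the value lists are only those observed in
# this batch, not necessarily the full codebook.
@dataclass
class CodingResult:
    id: str              # comment id, prefixed by source, e.g. "ytc_", "ytr_", "rdc_"
    responsibility: str  # observed: none, user, developer, company, ai_itself
    reasoning: str       # observed: consequentialist, deontological, mixed, unclear
    policy: str          # observed: none, regulate, liability, industry_self, unclear
    emotion: str         # observed: indifference, fear, outrage, resignation, approval, mixed
    coded_at: str        # ISO-8601 timestamp added by the pipeline, not by the model
```
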
Raw LLM Response
```json
[
  {"id":"ytr_UgyOjzmTIoLZXJQ8_614AaABAg.AOLq_Q-3U1jAOMOxQavCEm","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyOjzmTIoLZXJQ8_614AaABAg.AOLq_Q-3U1jAOMZQfG2nIu","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_UgxHMLXRr5iWJK8TPC14AaABAg.AOKcxEEl8x_AOL8M15bXkM","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgxBIFZaYUxl9uHUwgR4AaABAg.AOKakxbuc3SAOMoJcYULyg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytr_UgxBIFZaYUxl9uHUwgR4AaABAg.AOKakxbuc3SAOO6NOe7PSi","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_UgxBIFZaYUxl9uHUwgR4AaABAg.AOKakxbuc3SAOOyD59Zh8i","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytr_UgwnFgN90LTWyWMpQ1x4AaABAg.AOJlHYQMm7EAOLpvvvwYb0","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytr_Ugx0eO84iCVdGa-cKip4AaABAg.AOJau-ynNbIAOKRp7QKg86","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugx0eO84iCVdGa-cKip4AaABAg.AOJau-ynNbIAOR8w2Bjz6b","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_Ugzt0860SlM1kjlnlQV4AaABAg.AOJZLkvQuUxAOLpkVsaMkt","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}
]
```
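
To tie a Coding Result table back to the batch output, the raw response can be parsed and indexed by comment id. A minimal sketch, assuming the response body is exactly the JSON array shown; real model output may need markdown fences or trailing prose stripped before it parses:

```python
import json

def index_raw_response(raw: str) -> dict[str, dict]:
    """Parse a raw batch response (a JSON array of coded comments) and index it by comment id."""
    records = json.loads(raw)  # raises ValueError if the model output is not valid JSON
    return {rec["id"]: rec for rec in records}

# Usage: `raw` holds the JSON array above, read from wherever responses are stored.
# codings = index_raw_response(raw)
# row = codings["ytr_UgxBIFZaYUxl9uHUwgR4AaABAg.AOKakxbuc3SAOOyD59Zh8i"]
# row["responsibility"], row["policy"], row["emotion"]  # -> ("company", "liability", "fear")
```

The record retrieved in the commented usage is the sixth entry in the array, which is the one the Coding Result table above displays for this comment.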