Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "How much you want to bet that guarding the rich is one job they won't outsource …" (ytc_Ugy8KcFAh…)
- "So long as the exploitation of conscious beings benefits someone, that someone w…" (ytc_UgxI9HwJW…)
- "I had the same feeling before and it wasn't because of AI art, grew out of it ev…" (ytr_UgyuX8hYl…)
- "What happens when you start a song, ai helps you and then you do the fine tunnin…" (ytc_UgwOMrGt3…)
- "ChatGPT does the same thing with medical literature. ChatGPT will fabricate medi…" (ytc_Ugxqu8QHU…)
- "People have been developing apps without studying or learning how to program for…" (rdc_mju329p)
- "OpenAI offers users control over this data usage. You can opt out of having your…" (ytc_UgwI01PNA…)
- "The fact people have AI to summarize text messages of loved one is so sad, becau…" (ytc_Ugy7JH3bN…)
Comment
On a fundamentals level, this is great. I think they did a great job placing the foundations where they need to be.
However, we’re already seeing the degradation of these “Level 2 - Limited Risks” with AI sycophancy and Grok’s non-existent guard-rails.
AI is an ever-growing, evolving amoeba - and we need regulations that will adapt. Off the top of the dome - I think a good move would be creating an active department that watchdogs these companies for safety and regulation. Just as every restaurant has a health inspection, I think every AI company in the future will need active eyes on them.
Platform: youtube
Topic: AI Responsibility
Posted: 2026-01-20T14:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugzo__5fgdPCIMDiyHx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy86BA7yymFB1piM6t4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyL7xNMaIlNrkujg9h4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwyuh00bSb3oQJrgCF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyEUFsj8DtD0sjvyEJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyv8jrt351NmBJHBiZ4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz_ibFQjbNIO0PkpiJ4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwPNfFGJsFKLhZX4Vp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyWnvi7Is7vx80miSR4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwzs4ZR9qAiSHLdDDl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
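The raw response above is a JSON array with one object per comment, each carrying an `id` plus the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and validated before loading it into the coding table — assuming the label sets visible in this sample are the full codebook (the real codebook may include additional labels), and that `validate_codings` is a hypothetical helper, not part of the tool:

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# Assumption: the actual codebook may define more labels than appear here.
ALLOWED = {
    "responsibility": {"company", "government", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed codings."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Skip anything that is not an object with a comment ID.
        if not isinstance(row, dict) or "id" not in row:
            continue
        # Keep the row only if every dimension carries a known label.
        if all(row.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"approval"}]')
print(validate_codings(raw))
```

Rows with unknown or missing labels are dropped rather than repaired, so a malformed model response degrades to fewer coded comments instead of corrupting the result table.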