Raw LLM Responses
Inspect the exact model output that produced the coding for any comment.
Look up by comment ID
Random samples — select one to inspect
Anthropic should move to Canada. Seems like a much safer business environment th…
rdc_o7c1d5v
When you make AI "aligned with American goals and interests" means coercing othe…
ytc_UgyBAgnmQ…
Tldr: the reason AI couldn't do it is because the guy who prompted it didn't hav…
ytc_Ugzl9QCsN…
A story cannot be interesting without a journey, neither so can art be. Using AI…
ytc_Ugy7FKW8S…
While AI risks are real, I rely on OSVue to handle customer support as I develop…
ytc_UgxTP0m5p…
cgi most accurate. AI is bs, not what they're selling us. A code can't write a c…
ytr_UgzTW5XhE…
You can use AI tools to replicate your voice. Everything will be replicated. Eve…
ytr_UgwYqwzZV…
>First gen AI is racist, gets canned
>Second gen AI gets data sets, still racist…
ytc_UgxpYPEl-…
Comment
All of us AI developers are the modern Doctor Frankensteins. It's alive! We are creating the monsters, only for our creations to come after us humans in the end. It's spooky. But it's also true to some extent. Not exaggarating anything here. As always, make sure the AI does not go rogue against real people, especially with autonomous robots. It's a must security precaution.
youtube
AI Moral Status
2025-06-04T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgxXWEqinXfFJAPUbRN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxPPjYm6J0PkJqcGIZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwQNuqZ6gxqh82XoLF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyjudSXa0FJ7v75vJ54AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyQWTe-igtI8zh1Tzh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz-AltYnUpFzqT2vPp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwvo47xC8ZHT_bcsfl4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzg--IrKYJR_gl4khV4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgysowgjGv76zZmcYUJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyZ379BLqOw5hhEhOF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"}
]
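The raw response above is a JSON array in which each row assigns one value per coding dimension (responsibility, reasoning, policy, emotion). Before rows like these are stored as coding results, it is worth parsing the response and rejecting values outside the codebook. The sketch below does exactly that; the allowed-value sets are inferred only from the samples shown on this page, so the real codebook may contain additional categories.

```python
import json

# Allowed values per dimension, inferred from the samples above.
# Assumption: the actual codebook may define more categories than these.
SCHEMA = {
    "responsibility": {"developer", "company", "user", "government",
                       "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "unclear"},
    "policy": {"regulate", "liability", "ban", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose values
    all fall inside the schema; malformed rows are dropped."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

# One well-formed row and one with an out-of-schema emotion value.
raw = json.dumps([
    {"id": "ytc_example1", "responsibility": "developer",
     "reasoning": "consequentialist", "policy": "liability",
     "emotion": "fear"},
    {"id": "ytc_example2", "responsibility": "developer",
     "reasoning": "consequentialist", "policy": "liability",
     "emotion": "excitement"},  # not in the inferred schema
])
print(len(validate_codings(raw)))  # 1
```

In practice a coding pipeline might instead flag out-of-schema rows for re-prompting rather than silently dropping them; the filter above just shows where that check would sit.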