Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below:

- "If you actually understand AI properly, in an academic and mathematical way, the…" (ytr_Ugw5zfNQS…)
- "Or maybe, no AI at all? We dont need this shit and it brings only bad things to …" (ytc_UgxleD8Rg…)
- "The guy robot tells us his intentions, then the girl robot acts like a girl tryi…" (ytc_UgzPnwdhW…)
- "AI shouldn't be used for script writing, even if it takes one person to make sur…" (ytc_UgxcGXTwU…)
- "ethics. compassion. empathy. good and evil. selfishness versus selflessness. sac…" (ytc_Ugz7xCZMA…)
- "You cannot guarantee AI will be safe. There will always be rogue agents or rogue…" (ytc_UgxYExPBZ…)
- "Hans robot is openly talking about singularity is it true or they are made to sa…" (ytc_UgzSWwr0x…)
- "Not a bad idea. Enterprise IT at large organizations are moving rapidly towards…" (rdc_nzl6mfd)
Comment

> I as a random expert of opinions on the internet, think the AI bubble will burst. And it should end its focus on LLMs and start from square one with programming something we actually can understand more and ALIGNMENT should be priority #1!! Growing means unpredictable outcomes and thats unacceptable with something so powerful.

| Field | Value |
|---|---|
| Source | youtube |
| Video | AI Moral Status |
| Published | 2025-12-21T17:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwjzuIo15T--koOyct4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwKrvLxg-Qvmtv6leZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxcd7-mS90SmRurE054AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwHEEl6OaUz4G3z3Yh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwCG3Lwm8c8_I7C9Ap4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzbTvRsSdbf7Jyed0F4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwl6t91PKR6W_ZxbyB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxuM_nyb8fUEwxS_U54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxoXXQudd-i74U2cpN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz1tvsXcB7b2NLnajV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
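A raw batch response like the one above can be parsed and schema-checked before the codings are stored. The sketch below is a minimal example of that step; the allowed value sets are assumptions inferred from the codings visible on this page, not the tool's actual codebook, and the comment IDs in the sample data are hypothetical.

```python
import json

# Hypothetical raw LLM response: a JSON array of per-comment codings,
# shaped like the batch output shown above.
raw = """[
  {"id": "ytc_example1", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_example2", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]"""

# Assumed codebook: value sets inferred from the visible codings only.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself",
                       "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def validate(codings):
    """Split codings into rows that match the schema and error records."""
    valid, errors = [], []
    for row in codings:
        problems = [f"{dim}={row.get(dim)!r}"
                    for dim, ok in ALLOWED.items() if row.get(dim) not in ok]
        if "id" not in row:
            problems.append("missing id")
        if problems:
            errors.append((row.get("id"), problems))
        else:
            valid.append(row)
    return valid, errors

valid, errors = validate(json.loads(raw))
print(len(valid), len(errors))  # → 2 0
```

Because the model occasionally emits values outside the codebook, rejecting rows here (rather than coercing them) keeps the stored codings clean and makes retries easy to target by ID.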