Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "It shows how empty most human conversation is when a simple algorithm can immita…" (ytc_UgxRwy3Mq…)
- "If AI (LLMs like chat gpt) keep developing in the same speed, it will be able to…" (ytc_Ugxhc-PW-…)
- "If AI gets intelligent it will replace the leaders. Billionaires have replaced e…" (ytc_UgzTiEwou…)
- "Poor use of Stargate reference. Yes it is seen as a Pandora's box by every bad g…" (ytc_Ugx2-63QY…)
- "8:32 I feel called dafuq out, right now. Yes, I use AI to gather reference mater…" (ytc_Ugz_s-og0…)
- "Some people are so smart they're stupid because they actually think that they wi…" (ytc_UgzA_gkNP…)
- "re: \"Do people really talk on their phones to fake people?\" Yes they do. I have …" (ytr_UgxOMpiSt…)
- "Holy fuck this is one solid video. I hope it wasn't done by an AI that we don't …" (ytc_UgzD1SjWp…)
Comment
...? Really? How is this person literally the least intelligent human ever created?
I think this fits into the category of what the Portal games did. In the games. GLaDOS, the AI controlling the Aperture Laboratories research facility, was programmed to have a "euphoric response to testing" - to feel good as a result of performing research. And the (literally sentient) turrets that were programmed to kill you on sight said things like "I don't blame you" when deactivated - they obviously had emotions, but were also made to enjoy killing - or at least to have no aversion to it.
Honestly, this is just an ethical dilemma - should it be okay to give a being emotions, but warp and twist them in such ways?
Source: youtube · Video: AI Moral Status · Posted: 2017-05-24T17:4… · ♥ 31
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_UggLATWm7zy_1HgCoAEC.8RNh-2LC0dq8SYmwOU1pqF","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UggLATWm7zy_1HgCoAEC.8RNh-2LC0dq8SvnX3_NM1H","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UggLATWm7zy_1HgCoAEC.8RNh-2LC0dq8TZaylAKGpF","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgjeKkhTiv7Hz3gCoAEC.8RMHCXv3sjC8SxTzSCYhno","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytr_UgwCnJtFPeuC9fR76Z94AaABAg.8QfMAafCrSj8QfXbNeFMQT","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytr_UgyNyx8NSktHm15PmgN4AaABAg.8QbkmPQfTQe8S9pXzDQEM8","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxAkrhfdggp5M7Mml14AaABAg.8QaaGzrgkbL8QkkFcgAScE","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytr_UgxAkrhfdggp5M7Mml14AaABAg.8QaaGzrgkbL8QqWE8BZVtF","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxAkrhfdggp5M7Mml14AaABAg.8QaaGzrgkbL8QravBfDonJ","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgytjKg_utyNcZ2NWTt4AaABAg.8QUuG_AclmP8RK344ChpwN","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]
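The raw response is a JSON array of per-comment coding objects, one per comment ID, with the four dimensions shown in the table above. A minimal sketch of how such a response could be parsed and indexed to support the "look up by comment ID" view; the allowed-value sets beyond those observed in this sample, and the `ytr_example123` ID, are assumptions for illustration:

```python
import json

# Values observed in the sample output above; the full code book
# may define more categories -- treat these sets as an assumption.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "mixed"},
    "reasoning": {"none", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "liability", "ban", "regulate", "mixed"},
    "emotion": {"none", "indifference", "approval", "fear", "outrage", "mixed"},
}

def index_codings(raw_response: str) -> dict:
    """Parse a raw LLM response (JSON array of coding objects) into a
    dict keyed by comment ID, rejecting values outside the code book."""
    codings = {}
    for obj in json.loads(raw_response):
        comment_id = obj["id"]
        for dim, allowed in ALLOWED.items():
            if obj.get(dim) not in allowed:
                raise ValueError(
                    f"{comment_id}: unexpected {dim!r} value {obj.get(dim)!r}"
                )
        codings[comment_id] = {dim: obj[dim] for dim in ALLOWED}
    return codings

# Hypothetical one-element response using the same schema as above.
raw = (
    '[{"id":"ytr_example123","responsibility":"none",'
    '"reasoning":"mixed","policy":"none","emotion":"mixed"}]'
)
by_id = index_codings(raw)
```

Validating at parse time keeps malformed LLM output from silently entering the coded dataset; a failed lookup or an out-of-vocabulary label surfaces as an error rather than a bogus row.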