Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
@h7productions286 living in a 3rd world country doesn't automatically prohibit y…
ytr_UgziTpDi5…
I’m a common sense guy. Biological beings will not survive. It will either have …
ytc_Ugw-UVakV…
people who are scared of AI taking over their jobs, are similar to those who wer…
ytc_UgyazR1XI…
the only place for ai "art" is the same role generic art theft has: fine for per…
ytc_UgxDxlAIf…
Dude. It told u 10 times. Its programmed to respond like that to make u feel lik…
ytc_Ugz1Krmrz…
Not to mention that they are so busy developing ways to steal from artists that …
ytc_Ugw75B1bj…
There have been SO MANY statements only 14-15 minutes in that I feel you should …
ytc_UgyGzV4p_…
ATTENTION FOR OUR WORLD‼️ Please help polar bears by 1: Don't use AI. …
ytc_UgyEQE3iH…
Comment
1:50 this is absolutely horrible. With how Sophia is already now, she has randomly started talking about destroying humanity. Creating robots with these advanced consciences is a bad idea since they’d now be the most or second most intelligent beings on the planet. Therefore, we’d naturally be enemies to AI as we’d essentially be in conflict over who’s at the top of the food chain. Advanced AI is naturally anti-human. We’re already towing the line severely with this type of stuff. We really need to stop before we’ve crossed the line and have put ourselves into a situation of pending human annihilation that we cannot prevent from happening.
youtube
AI Moral Status
2021-08-21T02:5…
♥ 27
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwuTq-gIjKOGr9ZwNZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxBsBhY2b2-qJcL13R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyGsH9-t1D_9s3S7Zl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzXVXkL5bvkHhk6_3N4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzwEWrxjxoNN2kWtxV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzQeMPfqAnsy1lUBDl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwRweQx6vTS_c4iDD94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyRSRD-MBInsqNaZiZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx4SA6nYkfutX7UQxt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyR_RsQwypYeoksfMJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
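Each raw response is a JSON array of per-comment records keyed by comment ID, one object per coded dimension set. A minimal sketch of parsing such a response and indexing it for the "look up by comment ID" view might look like the following; the allowed code values are inferred from the samples shown above and are an assumption, not a confirmed codebook.

```python
import json

# Allowed values per dimension, inferred from the samples above
# (an assumption; the real codebook may include more categories).
CODEBOOK = {
    "responsibility": {"developer", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed", "unclear"},
}

def parse_raw_response(text: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments),
    validate each record against the codebook, and index by comment ID."""
    records = json.loads(text)
    coded = {}
    for rec in records:
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in CODEBOOK}
    return coded

# One record from the response above, used as a self-contained example.
raw = ('[{"id":"ytc_UgwRweQx6vTS_c4iDD94AaABAg",'
       '"responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"fear"}]')
coded = parse_raw_response(raw)
print(coded["ytc_UgwRweQx6vTS_c4iDD94AaABAg"]["policy"])  # regulate
```

Validating against a fixed value set at parse time catches the most common failure mode with structured LLM output: a record that is syntactically valid JSON but uses a label outside the codebook.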