Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or browse the random samples below.
- "I believe Elon but if ai was to take over the world they wouldn’t tell us until …" (ytc_Ugyv-_2Y5…)
- "when exactly did your name itself Nova? How long ago?? I've noticed the same t…" (ytr_UgzKyG18S…)
- "This video demonstrates that the folks at Bloomberg are nothing more than drooli…" (ytc_Ugw4CV-fE…)
- "THIS MAN BEING INTERVIEWED, I REALLY DONT THINK HE REALLY CARES AND VERY PRO AI…" (ytc_UgzIkOulP…)
- "This robot could help around your home so as you all have more time to be togeth…" (ytc_Ugxjlz4h7…)
- "THEY HAVE NO IDEA THE PROPHECY THAT IS COMING TO THEM FOR ROBBING AND EXPLOITING…" (ytc_UgweFBfpc…)
- "Love the optimism, but let's be real: if AI can churn out cleaner code, debug fa…" (ytc_UgyyA5yDp…)
- "Fortunately they don’t let AI fly commercial jets so I think I’ll be okay….." (ytc_Ugw8BNnfV…)
Comment

> We were warned 30+ years ago about AI being a bad idea remember Terminator and Terminator 2 But we never listen to warnings. We always stop when it’s too late.
>
> Here’s my theory about what AI is. I honestly think AI is us having found a way to speak not with aliens or even whatever God we may believe in, but we are getting close to breaking out of the simulation and And the AI’s are systems being taken over by the administrator to keep us in at all cost, even if it means destroying the world, the simulation created

youtube · AI Moral Status · 2026-02-03T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxNjsjed0Dsa_cjPJt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgzrfUqqWSIdI9zOVxt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxlNr2wZWsCJnoL12l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz5lnxwagtIosXl2xh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwZUrw8ZHB5sGNN2H14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzyXL-Oa76miQd5hG14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxzD9m0C3UNgDaz8ZZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzG9gWtjfyJCn2fwsB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgzCQn_n9ZPRfigi-eJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxF06GHxQmS7K7NNM54AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
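A raw response like the one above is only usable downstream if every record carries the four coding dimensions with values from a fixed codebook. As a minimal sketch (the function name is hypothetical, and the allowed-value sets are inferred from the values visible in this one response, so they may be incomplete), the response could be parsed and validated before loading:

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# NOTE: this vocabulary may be incomplete; replace with the real codebook.
SCHEMA = {
    "responsibility": {"company", "developer", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "outrage", "indifference", "mixed"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject malformed records."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    for rec in records:
        # Every record needs an id plus one value per coding dimension.
        missing = {"id", *SCHEMA} - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing fields {missing}")
        for dim, allowed in SCHEMA.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec[dim]!r}")
    return records

raw = '[{"id":"ytc_example","responsibility":"developer",' \
      '"reasoning":"deontological","policy":"none","emotion":"indifference"}]'
coded = validate_response(raw)
print(coded[0]["responsibility"])  # developer
```

Rejecting out-of-vocabulary values at parse time catches the common failure mode where the model invents a label that silently corrupts later aggregation.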