Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Don’t cheat
What’s the point if I get in trouble when I’m not even doing it
No…
ytc_Ugw8Lmf4T…
(I mean that artificial intelligence creates huge risks in the communication sec…
ytr_UgwhtOyVh…
Be extremely careful with this assessment. Open AI has stepped in to do the Pent…
ytc_UgwSFT-iI…
I love Bernie but this is one take that's absolutely wrong. That's not how "AI" …
ytc_UgzX2GTI6…
What's c ai, I don't get it, is it some type of therapy app, or something, I'm i…
ytr_Ugy_ILk_W…
People should always distinguish between GPT-4 , which costs money to use and GP…
ytc_UgxW4QjN5…
It's already happening. Here are some stats. 1- The broader white-collar job mar…
ytc_Ugw-mlyRg…
The problem with that hypothesis is that it was created by human minds. Our coll…
ytc_Ugy-d5fKj…
Comment
I don't speak english well, but if you're interested "artificial intelligence" is actually not artificial and is a natural result of human development. We have the natural drive to innovate and push the boundaries of existence. To put it another way, we are bound to play God as we get more resources and knowledge. The power of creation is too enticing to not use, despite the possibility of our demise as the dominant race at this moment in time. But the thing is, will these next-gen intelligences have the same drive in their nature? To play God, or just ensuring the world stays as efficient as possible?
youtube
AI Moral Status
2025-06-05T12:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugz8Bj7SPdC4Je7NMjJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx2anC7qBNFlKinPeJ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgysXcEHeNXuA6h9mhl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwlcWe5sNrEeaBM2ut4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugziql605JeLuPeUohl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy3uK-SuJayJDpYwS14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzK8GvBylT51hLe5XZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwqvVkyA2eWLEsvKxJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwCFascAELggc8RflF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzstRO6hzQqwPBzgGl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
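A raw response like the one above can be turned into a lookup table keyed by comment ID, which is how the "Look up by comment ID" view works in principle. A minimal sketch in Python; the payload is abridged to two entries from the response above, and the variable names (`raw_response`, `codings`) are illustrative, not from the tool itself:

```python
import json

# Raw model output: a JSON array of coded comments.
# Abridged to two entries from the response shown above.
raw_response = '''
[
  {"id": "ytc_Ugz8Bj7SPdC4Je7NMjJ4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx2anC7qBNFlKinPeJ4AaABAg", "responsibility": "distributed",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
'''

# Index by comment ID so any coded comment can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

row = codings["ytc_Ugx2anC7qBNFlKinPeJ4AaABAg"]
print(row["responsibility"], row["emotion"])  # distributed approval
```

The same dictionary supports simple checks before analysis, e.g. verifying that every entry carries all four coding dimensions (Responsibility, Reasoning, Policy, Emotion) before aggregating.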