Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgytWyMYy…` — "The robot must have said in hia mind that I should not kill this person cuz came…"
- `ytr_UgwdLHCKh…` — "Hi Divya, you got the right answer. Kudos. The contest is over and winners have …"
- `ytc_UgyLQ7XS1…` — "I’m laughing so hard right now. The nightmare scenario is being played out live…"
- `ytc_Ugw_8zRms…` — "I'm just gobsmacked that someone is dumb enough to get medical advice from an ai…"
- `ytr_Ugyad_pze…` — "XAI is Generating 500mw of power already and is planning to scale it to 1GW and …"
- `ytc_UgxMnqpX4…` — "This is just the beginning. I am sure at some point, a person would commit a mur…"
- `ytc_Ugy1EIj-_…` — "It's the socialism concept. 😂 Universal income is basically socialism 😂 USSR 😂😂 …"
- `ytc_UgxMsCkZv…` — "Not only is it getting harder to tell the differences, but AI companies apparent…"
Comment

> It is absolutly clear and logic that in the exact moment when AI becomes equal to our brain, nothing would stop it from becoming wayyyy smarter than us in a sec, and independed, and having its own conception of life- wether we agree with it or not. The question is: is there an international group of sientists that supervise this and making any rules? What is that group name? I seriously wants to know that.. looks basic to me.. how come nobody freaks out from that idea?

youtube · AI Moral Status · 2017-11-15T17:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgylyI8O8zxtQr7mRAp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzhdE-gIwaum8V8PeZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw9HbGXeaRogkn-RIZ4AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgybbrG6ZgpiH_xeDXd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxZKbcH63-V2jLzg3Z4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz5JMaKRAi4OKs93xF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxtYv6DeqIZMW7SCq94AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyySGncDY9uizoaxcJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy6TJz_lYDYOH8XjYl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgySFriW0yhiBXwaSuV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"indifference"}
]
```
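The raw response is a JSON array of coding records keyed by comment ID, which is what makes the "look up by comment ID" view possible. A minimal sketch of how such a response might be parsed into a lookup table — note the category sets below are inferred only from the values visible in this sample, not from a full codebook, and `parse_codings` is a hypothetical helper, not part of any tool shown here:

```python
import json

# Dimension values observed in the sample response above; the real
# codebook may allow more (these sets are an assumption).
ALLOWED = {
    "responsibility": {"ai_itself", "none", "government", "user",
                       "developer", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"resignation", "fear", "indifference", "outrage", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, silently
    dropping records with a missing id or an unknown dimension value."""
    by_id = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            by_id[cid] = {dim: rec[dim] for dim in ALLOWED}
    return by_id

# Usage with a placeholder comment ID:
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"regulate",'
       '"emotion":"fear"}]')
codings = parse_codings(raw)
print(codings["ytc_example"]["policy"])  # regulate
```

Validating against a fixed category set before indexing is one way to catch the common failure mode where the model invents a label outside the codebook.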