Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `ytr_UgxSjDRXC…` — "I really think the only logical and moral way to create and use AI would be in t…"
- `ytc_UgxFYhSLI…` — "I do not think it works this way. I do not think AI does really need wipe human…"
- `ytc_UgxQ77Uvp…` — "It's obvious that when AI has the "power" and knows the "nature (indole) of huma…"
- `ytc_UgyAqWqK8…` — "please AI do the job for me and don’t steal my job…so hard to please people…"
- `ytc_UgwfYjGAA…` — ""This can happen if you take a few things as true" "AI will continue to improve …"
- `ytc_UgzjfXXKL…` — "No matter how much you do and how much you say and where you say it, there will …"
- `ytc_UgyGBQvH4…` — "Did you see the first robot give the crazy one the go signal?!?? Whoa bruh!…"
- `ytc_UgxJBAtNR…` — "My policy is "don't hate the Transformer model, hate the Proompter". The Transfo…"
Comment

> I don't agree with having an AI Cloud wherein they can learn from each other unless they can be properly monitored and shut down easily by anyone when harm is evident.

youtube · AI Moral Status · 2022-10-23T06:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_UgxUpIRFpQWfaLwgIrl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgysQtvN9TK-D3S0bZp4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwVELC0tQd0IqzrwEZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugwqkf2NhDsLXvd7cD54AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwCDN6fhyeF9VQvN_V4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyhMAxvq_Ig9cidHZZ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugya7RG51_Y4jLfv2214AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugzou00e8KBk7RUf6Xl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxZhUPH903tYq_0SJt4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugy5E3kDzrKCy9lLkuN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"}
]
```
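The raw response is a JSON array with one object per coded comment. A minimal sketch of how such a batch response could be parsed and validated in Python — the field names come from the response above, but the sets of allowed values are inferred from this one sample and the real codebook may define more categories:

```python
import json

# Two records excerpted from a batch response of the shape shown above.
raw = '''[
  {"id": "ytc_UgwVELC0tQd0IqzrwEZ4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxUpIRFpQWfaLwgIrl4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]'''

# Allowed values per dimension, inferred from the sample output (assumption:
# the actual codebook may include categories not seen here).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "liability", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "indifference"},
}

def validate(records):
    """Keep only records whose value for every dimension is in the codebook."""
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

coded = validate(json.loads(raw))
by_id = {rec["id"]: rec for rec in coded}  # index codings by comment ID
print(by_id["ytc_UgwVELC0tQd0IqzrwEZ4AaABAg"]["policy"])  # regulate
```

Indexing by `id` mirrors the dashboard's comment-ID lookup: once the batch is validated, any coded comment's dimensions can be retrieved directly from the mapping.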