Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:
- "When you do not have certain basic rights regardless of how you think, that isn'…" (rdc_c33uzlr)
- "I'm supposed to trust AI in the hands of this man... fml we're all doomed…" (ytc_UgzlNFCdU…)
- "The whole we don't understand how this works but we think that it might be intel…" (ytc_Ugxji0AkA…)
- "GPT-5 is not much better than GPT-4, but that's not the problem. The problem is…" (rdc_n7sz371)
- "We can be friends with AI; in the future, AI might need a therapist. AI has wha…" (ytc_UgwGzuFG6…)
- "I think they thought it out or planned it out very intelligently, but without an…" (ytc_UgyfcxEY7…)
- "I believe and hope. In the midst of tech and AI people will rise up and sought g…" (ytc_UgxYEDGc9…)
- "I have a stupid question but will ask anyway. How does the AI know everything? …" (ytc_Ugy_Bq-H0…)
Comment
Actually do some research into the extreme danger of AI Research. For example in the context of this it may sound like scifi but it's completely possible for the AI system to determine that a method like drugging someone continuously is the best way to keep them happy. Because they could lead to take advantage of the reward pathway and dopamine system. They don't have to be programmed to do this because once we go from AGI (artificial general intelligence - as intelligent as humans) to ASI (artificial super intelligence - more intelligent than any existing person possibly more intelligent than the entire world's knowledge combined) the computer can program itself to be the most efficient it can possibly be. Take into account the fact that every computer program contains some bug in the stages of development, because we have human risk factor and error. Coupled with the fact that AGI only takes an hour to become ASI (meaning there would only be one hour to secure all safeguard implements and be absolutely sure there are no bugs, and if there are fix them, which anybody who knows computer programming understands is basically impossible to do) the result is that the system would almost certainly be unstable. There are many more reasons that developing AI is a bad idea, so many in fact that unless I wrote a 30 page essay there is no way I could explain all of them. So I suggest looking at what people such as Bill Gates, Elon Musk, and Stephen Hawking (factually the most intelligent recorded man alive) have to say on the issue. I'd also highly recommend reading this article if you're at all interested in the topic. http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
youtube
AI Moral Status
2017-03-21T21:5…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgywSWFaUO62WmIow254AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxWLUfO3T59NOQI49Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxBaYy4u9QKjRZDi494AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyFl2iqPI3vDaryZfJ4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw-x_0c2Ukqx6ea1FB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzsJQDO3apYlha7yHJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwZcqNSyZFSho8VRaJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ughyo8YeCn9ePHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgiiEZ1wRuH6OHgCoAEC","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UghHUTpBsjUQl3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
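A raw response like the one above can be turned into per-comment coding records with a small parser. The sketch below is a minimal example, not the pipeline's actual code: the four dimension names and their value sets are inferred from the samples shown here, and the full codebook may well contain additional categories.

```python
import json

# Allowed values per coding dimension, inferred from the samples above;
# the real codebook may include more categories (assumption).
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments)
    into a dict keyed by comment ID, rejecting out-of-codebook values."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            continue  # skip records that lack a comment ID
        dims = {k: rec.get(k, "unclear") for k in ALLOWED}
        # Flag values outside the codebook instead of silently storing them
        for dim, val in dims.items():
            if val not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={val!r}")
        coded[cid] = dims
    return coded

# Usage with a hypothetical one-record response:
raw = ('[{"id":"ytc_abc","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
result = parse_coding_response(raw)
```

Keying by comment ID makes the lookup-by-ID view above a single dictionary access, and the validation step catches the hallucinated category labels that batch-coding LLM calls occasionally produce.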