Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below:

- ytc_UgiuleNNr…: "Flashback to the movies, "Bicentennial Man" and "A.I. Artificial Intelligence". …"
- ytc_UgxRxuFb7…: "This looks like the 2026 version of baby pees a lot doll from the 1980s…"
- rdc_m9h32r0: "Truth. This isn't the fucking space race. We aren't creating extremely specializ…"
- ytc_UgwsCaoas…: "I once put one of my own essays (written myself) into an Ai checker. Came back 8…"
- ytc_UgxcOfDnM…: "Rule number one, don't give AI a freaken body with more strength and quicker bod…"
- ytr_Ugz9lhHrf…: "That's really interesting. What other dater are they scrapping? And for what oth…"
- ytc_UgxO8dxOO…: "A.I. sentience should be illegal before politicians use it to conquer some islan…"
- rdc_o8642kj: "Deleted my account just now. What’s a good AI to switch to that’s slightly more …"
Comment

> This sophia machine keeps saying : i will destroy humans. And that there are two options for humans : either humanity grows with her help of course or humans will be extinct again with her help.
> My question is : who put this idea in its mind ? Did someone programed it to say these things and threat us or it is its decetion after making its own evaluation?
> Shouldn't programmers put some limits to this AI ?
> I beleive it will come to a point that it will think itself a god and no programmer has ever done something to prevent it.

Source: youtube · AI Moral Status · 2017-12-08T12:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
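The dimensions in this table follow a fixed coding schema. As a rough illustration (not the tool's actual code), a coded record can be modeled like this; the example values in the comments come from the table and the batch response below, and `coded_at` is a timestamp the pipeline attaches when the code is stored:

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment, mirroring the dimensions shown in the table above."""
    comment_id: str      # e.g. "ytc_UgyU5XoEV1_bBVWRqaB4AaABAg"
    responsibility: str  # e.g. "developer", "ai_itself", "distributed", "none"
    reasoning: str       # e.g. "deontological", "consequentialist", "virtue", "unclear"
    policy: str          # e.g. "liability", "regulate", "none"
    emotion: str         # e.g. "fear", "approval", "mixed", "resignation", "indifference"
    coded_at: str        # ISO 8601 timestamp recorded when the code is stored
```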
Raw LLM Response
[
{"id":"ytc_Ugy490crFfufBJW2c554AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwWfLNzSA4DR8hBEd94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwGPJTT1O8YAqsZFQZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwNRdgpRTYpn-rq-FR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzABrrJyU2m2i6-1JB4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyU5XoEV1_bBVWRqaB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugy99DmsIJsDFaDjXUN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy_jdWCvQW9MYWN4ad4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxRfua7-rgJSSJh1V14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxhSmFKqghKihU1gBh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
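
The model codes comments in batches and returns a single JSON array, so the per-comment view above simply selects the entry whose `id` matches the comment being inspected. A minimal sketch of that lookup, assuming the raw response is available as a string (the function and variable names are illustrative, not the tool's actual code):

```python
import json

# Abridged copy of the raw response above, reduced to two entries for illustration.
raw_llm_response = """[
  {"id": "ytc_Ugy490crFfufBJW2c554AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyU5XoEV1_bBVWRqaB4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]"""

def find_coding(raw_response, comment_id):
    """Parse a raw batch response and return the coding entry for one comment ID."""
    entries = json.loads(raw_response)    # the response is a JSON array of coding objects
    for entry in entries:
        if entry.get("id") == comment_id:
            return entry                  # keys: id, responsibility, reasoning, policy, emotion
    return None                           # the ID was not coded in this batch

# The comment inspected above resolves to the developer/deontological/liability/fear entry.
print(find_coding(raw_llm_response, "ytc_UgyU5XoEV1_bBVWRqaB4AaABAg"))
```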