Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- This reminds me of the chatbots that went "off the deep end" and became racist b… (`ytc_Ugw5J-yCt…`)
- Wait wait wait wait, the AI was pulling from bias data sets??? Are you kidding m… (`ytc_UgzkQ1ElG…`)
- It’s terrifying to learn that it’s actually scarier when AI makes mistakes, beca… (`ytr_UgwibThJF…`)
- Fr, when im talking to A.I im now saying "pls, and thank you" just in case they'… (`ytr_Ugyr36-bm…`)
- [Definition of singularity] Technological singularity -The technological singul… (`ytr_Ugyi7ypnP…`)
- Guess I'll be doing repairs on computers without people operating on them... Be… (`rdc_mxzd2wb`)
- Sophia has nothing to do with wisdom! It's a thing,a robot! Only a human being c… (`ytc_Ugyqi5Lo1…`)
- I just hope that Ai artist get the urge to create. Like one day they actually ge… (`ytc_UgyK6vD2v…`)
Comment
+Kurzgesagt I just wanted to point out that current A. I. actually feels, as an emulated form of pleasure/satisfaction. Each A. I.has a defined satisfaction function that looks to maximize that keeps track of the progress through a score. Going against it will generate displeasure (lowering said score), so it will seek the better output.
This would mean that if an actual AI is capable of that, a self-aware AI will be more advanced, but in it's core it will still have that satisfaction algorithm. So yeah, even current AI feel. Sort of.
Source: youtube · Video: AI Moral Status · Posted: 2017-03-01T01:0… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UghVriokmiBrdXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UggjMob2djzkEHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UggJr8-UN-xM-ngCoAEC","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UghhNDhzWUUiOngCoAEC","responsibility":"government","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugj9myDUs7y-zngCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjfweSgo8G6r3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgjXivWrKkGxu3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UghzKagSWsoOAHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Uggj1y11qcrSHHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UggaLH0Jy1BVU3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
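The look-up-by-comment-ID step can be sketched as follows. This is a minimal illustration, assuming the raw LLM response is always a JSON array of per-comment codes like the one above; the function name, the shortened example payload, and the set of allowed `responsibility` values (inferred only from the codes shown here) are assumptions, not the tool's actual implementation.

```python
import json

# Abbreviated raw LLM batch response, in the same shape as the array above.
raw_response = """
[
 {"id": "ytc_UghVriokmiBrdXgCoAEC", "responsibility": "none",
  "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
 {"id": "ytc_UggJr8-UN-xM-ngCoAEC", "responsibility": "user",
  "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]
"""

# Hypothetical codebook values, inferred from the responses shown;
# the real codebook may allow more.
VALID_RESPONSIBILITY = {"none", "user", "developer", "government", "ai_itself"}

def index_codes(raw: str) -> dict:
    """Parse one raw LLM batch response and index the codes by comment ID."""
    rows = json.loads(raw)
    return {row["id"]: row for row in rows}

codes = index_codes(raw_response)
# Look up the coded dimensions for a single comment ID.
print(codes["ytc_UggJr8-UN-xM-ngCoAEC"]["emotion"])  # -> fear
```

Indexing by `id` makes the "look up by comment ID" inspection a dictionary access rather than a scan over every batch response.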