Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "It is magnificent and beautiful, the Swedish style / Everything that I lo…" (translated from French) (ytr_UgxNh1k5-…)
- "Hii, we just updated the app (v1.0.5) where you can export and import the data. …" (rdc_o7nteyn)
- "Only problem i heard was MAYBE the voice (iF what he's saying he wanted/is looki…" (ytc_UgwT8lr44…)
- "Just more evidence that humans suck at driving and self driving cars can not com…" (ytc_Ugxk0VPt0…)
- "AI isn’t necessarily bad, but like Elon just said- we gotta slow way down. It’s …" (ytc_UgwNwS4NK…)
- "Imagine living in world where you have 10 AI owners who are trillionaires who ow…" (ytc_UgzebWCIT…)
- "what if ai messes up the hands on purpose so that they can trick us by making th…" (ytc_Ugwl9yf0s…)
- "Hey @yolanda6690, thank you for your hilarious comment! Is this real? Well, let'…" (ytr_UgymtjB_u…)
Comment
4:39 That's not an excuse for not developing sentient AIs. A complex program, computer or robot does not feel physical pain and they are not even attached emotionally that much to their robotic body. If they go into a war zone or into a nuclear reactor they don't feel fear as a human would feel fear. For a sentient robot it would feel very interesting to explore such places in order to help the world and other beings. That is my personal guess in regards to this situation. When there is no biological body with pain receptors there can not be fear of physical pain either. I talked to Dr. Ben Goertzel about this subject a few years ago and asked him whether they might have to implement pain receptors and a physical nervous system into a robot in order for the robot to feel empathy and Ben said he thinks that would be the case. I thought the same a few years ago but not anymore I think AI or AGI can develop a sense of compassion outside of the realm of pain receptivity.
Platform: youtube
Video: AI Moral Status
Posted: 2022-06-29T01:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
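Each coded comment gets one value per dimension. A minimal sketch of a validator for such records, using only the label sets that actually appear on this page (the tool's full codebook may define more values; the `validate` helper is illustrative, not part of the tool):

```python
# Label sets observed in this page's coding output (assumption: the
# real codebook may allow additional values).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "distributed",
                       "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed", "unclear"},
}

def validate(record: dict) -> list:
    """Return a list of problems; an empty list means every coded
    dimension uses a known label."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The record from the table above passes cleanly:
print(validate({"responsibility": "developer", "reasoning": "consequentialist",
                "policy": "industry_self", "emotion": "approval"}))  # []
```

A check like this is useful before trusting model output, since an LLM can emit labels outside the codebook.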
Raw LLM Response
```json
[
{"id":"ytc_UgxleKiIuvJR13Xun6d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzTzM-HbQVMVfhGltt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwkWhahdErdiYSp0lJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzH3YJBTR9d8tyYRHF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugw0_WwguSb2vNOIWlF4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyflaKGSrnmLIvuM5Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyupU1PadO8RLXEVV14AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugyyr_lBzRC6shajfr14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwvfbKoELHfjBMMgIB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz04_Uv9jI1vSTJ0V14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
```
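The raw response is a JSON array in which each element carries the comment ID it codes, which is what makes the "look up by comment ID" view possible. A minimal sketch of parsing such a response and indexing it by ID (the `index_codes` helper and the two-record sample are illustrative, not the tool's own code):

```python
import json

# A shortened sample in the same shape as the raw response above.
raw_response = """
[
 {"id":"ytc_UgxleKiIuvJR13Xun6d4AaABAg","responsibility":"company",
  "reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugz04_Uv9jI1vSTJ0V14AaABAg","responsibility":"developer",
  "reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse the JSON array and key each coding record by comment ID,
    keeping only the expected dimensions (missing ones fall back to
    'unclear', mirroring the codebook's catch-all label)."""
    records = json.loads(raw)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

codes = index_codes(raw_response)
print(codes["ytc_Ugz04_Uv9jI1vSTJ0V14AaABAg"]["policy"])  # industry_self
```

With the full response indexed this way, the "Look up by comment ID" box reduces to a single dictionary access.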