Raw LLM Responses
Inspect the exact model output for any coded comment by looking it up with its comment ID.
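The same lookup can be scripted. A minimal sketch is below; the file name `raw_llm_responses.json` and the assumption that all coded records have been merged into one JSON array are illustrative, not part of the published tooling.

```python
import json

def lookup_coded_comment(comment_id: str, path: str = "raw_llm_responses.json") -> dict | None:
    """Return the coded record for one comment ID, or None if it is absent.

    Assumes the raw LLM responses have been merged into a single JSON array
    of objects shaped like the records shown further down this page.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return next((r for r in records if r.get("id") == comment_id), None)

# Example: fetch the record for the comment inspected below.
print(lookup_coded_comment("ytc_UghVriokmiBrdXgCoAEC"))
```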
Random samples

- rdc_nt8ycal: "After watching Hank Green talk about how little control these companies actually…"
- ytc_UgwqH0W4q…: "I haven't seen any AI art in my style but I've been sharing it for 8 years on so…"
- ytc_UgytYIo6w…: "Elon Will build A.I. that destroys all the other A.I. Tony Stark took out A.I. s…"
- ytc_UgyRMf7vv…: "AI - BLAH BLAH BLAH You need to invest in me, give me more more money please The…"
- ytc_UgxNC0M28…: "What's worse is that real artists can usually recognize ai art since we are fami…"
- rdc_h4o5p8d: "If a robot uses machine learning with a sufficiently large dataset to determine …"
- ytc_Ugxn1faaD…: "2:33 maybe the problem is, is that AI is created by humans... for a number of re…"
- ytc_UgiaKBAMw…: "I don't get how this scenario is plausible. Wouldn't a self driving car be progr…"
Comment
I don't actually think it is possible to make an AI truly sentient. I mean, how could you even program that into an AI if you wanted to? Sure, you could give it receptors that can detect certain feelings and respond appropriately, but these are simply simulated responses based upon input criteria.
I'm not saying we can NEVER understand or create sentience, but if we do I think it will be through some kind of biological process. I think the whole "if we make a computer with such advanced AI then it is sentient" argument is fundamentally flawed. You can't program something to be sentient. Just my opinion of course, but I think that muddling advanced AI and actual sentience is problematic.
Source: youtube · AI Moral Status · 2017-03-01T20:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
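The four dimensions above are the per-comment coding schema shown on this page. A minimal sketch of how one such record could be represented in code follows; the `CodingResult` name is hypothetical, and the values noted in comments are only those observed in the raw responses below, not necessarily the full codebooks.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One LLM coding decision for a single comment."""
    comment_id: str
    responsibility: str  # observed values include "none", "developer", "user", "government", "ai_itself"
    reasoning: str       # observed values include "deontological", "consequentialist", "unclear"
    policy: str          # observed values include "regulate", "ban", "liability", "none", "unclear"
    emotion: str         # observed values include "indifference", "approval", "fear", "outrage", "mixed"
    coded_at: datetime

# The record from the table above; its ID matches the first entry of the raw response below.
result = CodingResult(
    comment_id="ytc_UghVriokmiBrdXgCoAEC",
    responsibility="none",
    reasoning="unclear",
    policy="unclear",
    emotion="indifference",
    coded_at=datetime.fromisoformat("2026-04-27T06:26:44.938723"),
)
```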
Raw LLM Response
[
{"id":"ytc_UghVriokmiBrdXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UggjMob2djzkEHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UggJr8-UN-xM-ngCoAEC","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UghhNDhzWUUiOngCoAEC","responsibility":"government","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugj9myDUs7y-zngCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjfweSgo8G6r3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgjXivWrKkGxu3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UghzKagSWsoOAHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Uggj1y11qcrSHHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UggaLH0Jy1BVU3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
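The raw response is a single JSON array with one object per comment in the batch. A minimal parsing and sanity-check sketch is below; the function name and error handling are illustrative, and only the field names visible in the records above are assumed.

```python
import json

REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(raw: str) -> dict[str, dict]:
    """Parse one raw LLM response and index its coded records by comment ID.

    Raises ValueError for anything other than a list of complete records,
    which is how malformed model output would surface during coding.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    indexed: dict[str, dict] = {}
    for rec in records:
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} is missing {sorted(missing)}")
        indexed[rec["id"]] = rec
    return indexed
```

Indexing by ID is what makes the per-comment lookup at the top of this page a single dictionary access once all batches are merged.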