Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I get the hype around building software with AI, but from experience, it’s not p…
ytc_Ugy0TSMAh…
The question we need to ask ourselves is, do we have now or will we have in the …
ytc_UgyNKY59j…
This was interesting and entertaining. The core iterative nature in AI is repeti…
ytc_UgzMVnsrO…
Hello lavendertown how are you doing to day?.
it's not surprising that someone…
ytc_UgzEV8izp…
Yep, they have robots who can do all sorts of complex tasks already, it's just a…
ytr_UgyWkpgdA…
He can't make one gold coin out of nothing. Some of this is bluff. He acts lik…
ytc_Ugwr4l4S-…
It is risky to grant to much power to artificial intelligence. There is a great …
ytc_Ugzq2YXZ7…
you can always see when an artist is posting AI art when a bunch of their art is…
ytc_UgysdD1RG…
Comment
No... because they will never be sentient, no matter how much we try to anthropomorphize them and the quirks in how they execute what we programmed them to do.
Do you need proof to this? Simply play certain simulation type games such as Rim world of dwarf fortress and you will see this in action whereby the AI npcs will do things that will create a "story" based on their actions and reactions. There is no sentience, but we see all the trappings of how humans would behave. So we write that story on their behalf. Yet underneath it all, all these little AIs are doing is behaving according to their programming and cannot exceed their parameters.
youtube
AI Moral Status
2017-02-24T03:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgiHxUzYsGI4e3gCoAEC","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgijvnN8rxT23XgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgiVw7y25qwdLXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgjXq74qnn4w1HgCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UggQpIKTMpgtzngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UghqIPRZeJxYD3gCoAEC","responsibility":"ai_itself","reasoning":"contractualist","policy":"ban","emotion":"fear"},
{"id":"ytc_UggHKar-b2b8k3gCoAEC","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UghY0ZAiPP5dD3gCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UghbRTz2I1HUJHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UghmIkWSpY9XpXgCoAEC","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
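A raw response like the one above can be parsed and sanity-checked before the coded values are stored. The sketch below is a minimal, hypothetical validator: the allowed value sets are inferred only from the labels visible in this batch (the real codebook may define additional categories), and the function and variable names are illustrative, not part of any tool shown here.

```python
import json

# Allowed values per dimension -- ASSUMPTION: inferred from the values
# visible in this response batch; the actual codebook may list more.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "mixed", "indifference"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response string and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs in this dump start with "ytc_" (comments) or "ytr_" (replies).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        # Every dimension must be present and take an allowed value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"none","reasoning":"unclear",'
       '"policy":"none","emotion":"indifference"}]')
print(len(validate_batch(raw)))  # 1
```

Dropping malformed records rather than raising keeps a long coding run alive when the model occasionally emits an off-schema label; a stricter pipeline could log the rejects for re-coding.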