Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
AI will never be sentient, not unless we somehow discover how to measure sentience.
AI exists to accomplish a goal, whatever goal its creator has given to it. more specifically, it exists to maximize a parameter or set of parameters it has been told to maximize. these parameters can be as simple or as complex as we want, and we are diving headfirst into the latter. the problem is, it will never do anything past these parameters. an AI told to generate images wont start generating audio unless we tell it to. an Ai whose goal is to predict protein folding wont suddenly start trying to maximize our usage of water. etc, etc.
AI will only ever become sentient if we tell it to, if we somehow manage to discover the parameters of sentience and then tell it to maximize them. until we do that, it wont be sentient. plain and simple.
Platform: youtube · Video: AI Moral Status · Posted: 2023-09-05T13:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyOheumHPpY75Y7wQ54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwJa9Gkhfs3H8IEeIl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzL7uYJosHho7qYX_x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyVgvtTIb_wHMXdpY14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxLYaJeuiE839pthe54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwSPLfiY6ylsT2q_-R4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyZaqMkir-VG-SOGKl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxh2LRjdSiVFwnPo2l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgygTYWgl9Iz0UNzyhx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz8LYz9ax11vCGOb4t4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
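The raw response above is a JSON array with one coding object per comment. A minimal sketch of how such a response could be parsed and looked up by comment ID (the IDs and helper name here are illustrative, not part of the tool's actual code):

```python
import json

# Raw LLM response: a JSON array of per-comment codings. The shape
# mirrors the example response above; these two IDs are illustrative.
raw_response = '''
[
  {"id": "ytc_UgyOheumHPpY75Y7wQ54AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwJa9Gkhfs3H8IEeIl4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
'''

def index_codings(response_text):
    """Parse a raw LLM response and index the coding rows by comment ID."""
    codings = json.loads(response_text)
    return {row["id"]: row for row in codings}

by_id = index_codings(raw_response)
coding = by_id["ytc_UgwJa9Gkhfs3H8IEeIl4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # developer indifference
```

In a real pipeline, `json.loads` would be wrapped in error handling, since model output is not guaranteed to be valid JSON.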