Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Girl ai has to stop like half of the time people talk trash about ai photos…
ytc_Ugzmyqezf…
People don’t understand or appreciate LLM will turn into AGI not by magic but by…
ytc_Ugw3zKcKg…
Who did the creator steal from? Everyone and everything that came before him or…
ytr_Ugwhd1s0r…
Dude got too attached and friendly with the AI that it thought it was sentient. …
ytc_Ugxrln4xA…
@ShawnLangford I do agree that we might reach that point for juniors in that ti…
ytr_UgxmvsgSj…
Mock it now, but the technology is only going to get better with each iteration.…
ytc_UgwcxDer2…
What will happen to the hundreds of millions of Americans who will lost their jo…
ytc_UgxQ3e_yw…
And the so called godfather of ai is laughingly saying become a plumber. You sic…
ytc_Ugx1V5B_s…
Comment
+Bolby Ballinger, robots might not appear to have limits, but they do. We have biological limitations, they have energy and space constraints. They also have reliability issues and the inability to repair themselves. Also, I do not believe it will be upset at not having received payment in the same way a slave is not upset that they aren't being paid (they probably care more about their freedom). At this point, it's impossible to say what an AI will care about as we can't anticipate which AI will develop and how. For example, you can program a set of functions to learn things such as how to walk with "falling = bad" and "up=good". Eventually the robot will teach itself to walk through trial and error. There are videos of this on YouTube already. But this is a very limited structure to begin with. Similarly, cats and dogs have very advanced intelligence. However, they don't seek money, revenge on their masters, or any of that stuff. So it is possible, within the right framework, that an AI could develop that would not have a negative connotation.
youtube
AI Moral Status
2016-11-05T04:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
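The table above is a rendering of one coded record. A minimal sketch of how such a record could be turned into that markdown table (the dimension names come from the table; the function name is hypothetical):

```python
def render_coding(coding: dict) -> str:
    """Render one coded record as a two-column markdown table."""
    lines = ["| Dimension | Value |", "|---|---|"]
    for dim, value in coding.items():
        # Capitalize the dimension key for display, matching the table above.
        lines.append(f"| {dim.capitalize()} | {value} |")
    return "\n".join(lines)

# Hypothetical record using values seen in this sample.
print(render_coding({"responsibility": "developer", "emotion": "indifference"}))
```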
Raw LLM Response
```json
[
{"id":"ytr_Ugico5LpRpGANngCoAEC.8L1QFU5Bk6v8L3pUG3RhjH","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugj-Pbakn7exB3gCoAEC.8KzSQwZh2bJ8LK_hlDTWNP","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytr_UgiwLipMoGnvzHgCoAEC.8KvqafHEuQZ8KyfxtleBu2","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytr_UggGWRbW-RsxW3gCoAEC.8KuuexCbPMr8L4mDTg_I6G","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytr_UgiulyL63POSLXgCoAEC.8KuniCG3liy8L67UbJ2zfh","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgiEZUx0Er76jXgCoAEC.8KnkpFrh0lq8KrLhmxuZkP","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytr_UgiEZUx0Er76jXgCoAEC.8KnkpFrh0lq8Kt5YMADtYz","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgiEZUx0Er76jXgCoAEC.8KnkpFrh0lq8Kt6O2q787C","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgiEZUx0Er76jXgCoAEC.8KnkpFrh0lq8Kue1Nma3eI","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgiS952mNV0w2XgCoAEC.8Khyerca2BV8ME9Y3YkNeg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
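A raw response like the one above can be parsed and validated before the codings are stored. The sketch below is a minimal, hypothetical example; the four dimension names come from the coding table, but the allowed value sets are inferred only from the values visible in this sample and are almost certainly not the full codebook:

```python
import json

# Allowed values per dimension, inferred from this sample alone
# (assumption: the real codebook defines more categories than these).
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none"},
    "emotion": {"indifference", "mixed", "outrage", "fear"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array) into validated coding rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Comment IDs in this dump start with "ytc_" or "ytr_".
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Hypothetical one-row response for illustration.
raw = ('[{"id":"ytr_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
print(len(parse_codings(raw)))  # → 1
```

Rows that fail validation are simply dropped here; a real pipeline would more likely log them for re-coding.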