Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
@laurentiuvladutmanea If you're allowed to learn from pre-existing works then so…
ytr_UgwD6OkLZ…
Hi I used to love character ai until I got addicted and a family member had to d…
ytc_UgwSFhOos…
Here is an idea, revolutionary i know i know... what if, now bare with me... wha…
ytc_Ugz1e29XY…
its really not a grand thing , this is just copying whatever data it had been fe…
ytc_UgyeNcbn6…
I would like to know how scripted these robots are. How do we know a great deal …
ytc_Ugxh1Otlu…
Haven't billionaires already destroyed enough of the world? Do they really need …
ytc_UgyKiC4B5…
LLMs are still not AGI though, it's just another program without morals or self …
ytc_UgxDsiqyb…
I've been an AI researcher since 2016 and I agree with you. We will also merge w…
ytc_UgyFo-bvA…
Comment
A good example is the game, Detroit: Become Human. when AI become self conscious to the point of being almost human like they will need some form of rights, maybe different rights, but some none the less. but that is assuming they become almost human. the problem with AI having feelings we couldn't program them with true feelings, why? because feelings are tied up in our consciousness, which some would debate it is a soul, and other simply neural chemical responses. whatever the case may be, they only "feelings" would be preprogrammed responses. take siri for example, if I "insult" siri she will say "ouch" or "that wasn't nice" because she has feelings? no, if I say the words "hey siri, you suck!" she will search her database not too differently than this, inquiry/%you_suck%/cmd_line.624//run (yes ik that's not programming language). and when we do get AI that is self conscious I believe it will be purpose built tech, and not your refrigerator. but as the old adage goes, "We'll cross that bridge when we get there."
youtube
AI Moral Status
2017-02-24T16:1…
♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UggJIup0iIlZVXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugiqorz5t1QhRHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"unclear"},
  {"id":"ytc_UghZ5Le5QNo9W3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugj2YPylz7gmH3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgiIQ5CNwZV0VXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UggW5A_hvTuZv3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugj0GWYELnqn_HgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"unclear"},
  {"id":"ytc_Ugi37YvVMkNA3ngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgjFDOQXOgm_-HgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgjVqIuTCm8kfngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
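A batch response like the one above can be checked before it enters the dataset: parse the JSON and verify that every record carries the four coding dimensions with recognized values. The sketch below is a minimal validator; the allowed value sets are inferred only from the labels visible in this output (the project's full codebook may define more categories), and the `ALLOWED` mapping and `validate` helper are illustrative names, not part of the actual pipeline.

```python
import json

# Allowed values per dimension, inferred from this sample output alone
# (assumption: the real codebook may include additional categories).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"resignation", "unclear", "indifference",
                "approval", "outrage", "fear"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and check each record against the schema."""
    records = json.loads(raw)
    for rec in records:
        # Every record needs an id plus one value per coding dimension.
        missing = {"id", *ALLOWED} - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing {sorted(missing)}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec[dim]!r}")
    return records

# Hypothetical one-record response for demonstration.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
records = validate(raw)
print(len(records))  # 1
```

Records that fail validation can then be queued for re-coding rather than silently dropped, which keeps the coded table and the raw responses in sync.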