Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I wonder if we'll get to a point where we won't know whether an AI is concious as it argues it's case to humanity as a whole or if it's simply mimicking how a human would argue so well that it convinces us that it is concious when it isn't.
The fact that we won't truly know, and might have to redefine conciousness is a bit scary.
If it's simply 'something that has self awareness' then what we currently have could be considered concious.
But then we have to ask if it really understands itself or if it's simply mimicking. What does understanding mean here too?
Source: youtube · Video: AI Moral Status · Posted: 2024-10-20T14:1… · ♥ 19
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugwnyk3IPcTPbZoq05F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz_ofAisD9GDL9ehqR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw4hzG4fIlcA8PRlMF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyegJVkEYpYD23hS6N4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxD6Hv1PGbYfKpxXXB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyAPUS8zuZGdRsIurl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugz2_pETF4MfVIzih9x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxd3t02KkL6Cd_Z5e14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwTApziUzryhtp78EJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz2L_8kJjEAYNmR9xx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
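The raw response above is a flat JSON array in which each element carries a comment `id` plus the four coding dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response could be parsed and a single comment's coding looked up by ID (this is an assumed workflow for illustration, not the tool's actual code; the two rows below are copied from the response above):

```python
import json

# Raw batch response, abridged to two rows from the array above.
raw = """
[
  {"id": "ytc_Ugwnyk3IPcTPbZoq05F4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwTApziUzryhtp78EJ4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
"""

# Index the rows by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Sanity check: every row should carry the same four coding dimensions.
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}
for row in codings.values():
    assert DIMENSIONS <= row.keys(), f"missing dimension in {row['id']}"

# Look up the comment inspected above; its coding matches the table
# (Reasoning: consequentialist, Emotion: fear).
coding = codings["ytc_Ugwnyk3IPcTPbZoq05F4AaABAg"]
print(coding["reasoning"], coding["emotion"])
```

In practice the model's output may be wrapped in extra text or be invalid JSON, so a production version of this lookup would catch `json.JSONDecodeError` and validate the dimension values against their allowed code lists before storing them.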