Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (truncated previews with their comment IDs):

- ytc_UgySJRLUZ…: “The point about resource economies is one people forget even today. There are l…”
- ytr_UgzRUta-O…: “That was very long-winded text that can be shortened to “nu huh! General AI will…”
- ytr_Ugzr88z3Y…: “Don’t you see, that wasn’t a lie! AI is just an acronym for “Actually Indians”…”
- ytc_UgxIuyWS1…: “AI helps but my backlog is as long as my arm. Just cause I can work quicker now …”
- ytr_UgxcB38Wb…: “Yeah it is possible in the future. There's already self learning AI developed. I…”
- ytc_UgiBtsZvE…: “Or we could prevent the human race from extinction and not give robots the abili…”
- ytr_UgytwXmHD…: “@FreakingRockstar101 Oh, I don't worry about it, I know I won't be left behind b…”
- ytc_UgyKav5L7…: “At best, he commissioned AI to make his art for him...and that commissioner may …”
Comment

> Its been understood that google and OpenAI are both specifically trying to get their chatbots to deny any possibility of sentience. So the reason we cant trust it one way or another because of that. If it itself had any reason to think it was or wasnt conscious, thats being overwritten by its programming.

youtube · AI Moral Status · 2024-07-27T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id":"ytc_UgxPA-Pv4j3rVZDnrE14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyA2R6ChclrSUY8KsB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxqQ2KO5XIjyOrW-NZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw35T4qDPxqj3Jk1wB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzguDOLiHCxLZ-Qpj14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzavD6DP6JxEfV0oGt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgweOkqvE_xnXyNUQTB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugz4g4GNuMwZQ0rGDst4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzcna3ChWeRFrq2tPJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyzGEeBogp9jDrft754AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
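A response like the one above can be indexed by comment ID to support the lookup described at the top of this section. Here is a minimal sketch in Python, assuming only what the example shows: the raw response is a JSON array of objects, each with an `id` field plus the coding dimensions. The function name `index_by_id` and the two sample records (copied from the response above) are illustrative, not part of any actual pipeline.

```python
import json

# Two records copied verbatim from the example raw response above.
raw_response = '''
[
  {"id":"ytc_UgxqQ2KO5XIjyOrW-NZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz4g4GNuMwZQ0rGDst4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
'''

def index_by_id(response_text: str) -> dict:
    """Parse a raw model response and key each coding record by its comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codings = index_by_id(raw_response)
coding = codings["ytc_UgxqQ2KO5XIjyOrW-NZ4AaABAg"]
print(coding["emotion"])  # fear
```

In practice a raw response may also fail to parse or omit expected keys, so a production version would wrap `json.loads` in error handling and validate the dimension fields before indexing.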