Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up any comment by its ID, or inspect one of the random samples below.
- "I hope scientist or ai engineers makes a machine or something for the ai picture…" (`ytc_UgyGTcLJa…`)
- "@BrendanDellyes absolutely and furthermore when AI gets to that level will there…" (`ytr_UgxUfU-x6…`)
- "If you put human brain cell into an ai, you moved someone’s consciousness into t…" (`ytc_Ugz1FQ0f0…`)
- "Society after AI apocalypse, once again going through the stone, bronze, iron, s…" (`ytc_UgxI4Jy4s…`)
- "i care if anyone \"AI generates art\", the very basis of how it works if literally…" (`ytc_UgzRND1cv…`)
- "Great observation! Sophia, being a robot, doesn’t sweat like humans do. Her desi…" (`ytr_UgxGiuECh…`)
- "Aff Man... a muieh trepando +Eu ... deferente a a cara dela .. cai 😂🎉…" (`ytc_UgyOGKauM…`)
- "I was involved with neural nets in 1990 and predicted that AI was going to be th…" (`ytc_UgxpOaYd0…`)
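A minimal sketch of how the lookup-by-ID step might work, assuming the coded records are held as a list of dicts with an `"id"` key (the shape of the raw LLM response in this dump). The record contents and IDs below are illustrative placeholders, not real data from this corpus.

```python
# Sketch: index coded records by comment ID for constant-time lookup.
# Assumes each record is a dict with an "id" key, matching the shape
# of the raw LLM response shown in this dump. Example data is made up.

def build_index(records):
    """Map comment ID -> coded record."""
    return {rec["id"]: rec for rec in records}

records = [
    {"id": "ytc_example1", "responsibility": "developer",
     "reasoning": "consequentialist", "policy": "unclear",
     "emotion": "indifference"},
    {"id": "ytr_example2", "responsibility": "unclear",
     "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
]

index = build_index(records)
print(index["ytc_example1"]["emotion"])  # indifference
```

The `ytc_`/`ytr_` prefixes appear in this dump on top-level comments and replies respectively; that distinction is an inference from the samples, not a documented convention.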
Comment
This reminds me of videos on YouTube of talking green parrots.
They look like they can think for themselves and talk just like a human but in reality you have to realize they are just trained to say the right things when the trainer says the right ' trigger words '
It probably took a while to ' program ' them to say the right things and go off of each other's ' trigger words ' to make it appear as if they are arguing and interrupting each other.
I listened to a program recently where robots did not like Elon Musk or something of that matter.
The host of the program was laughing about it and I thought he shouldn't laugh because although I don't believe they can ever make an artificial intelligence that really thinks like a human brain I believe they indeed can program one to recognize faces and voices and even smells and make determinations on whether a person is on the ' ok ' list or on the shit list and that is bad enough without being able to really ' think ' for itself.
Source: youtube · Video: AI Moral Status · Posted: 2022-12-14T04:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzlrh20Y8BIafy1Tbd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz-aSb8VJWoeu52EoF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz55I0SVhgOXZK8_Dp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxRWkFomi7wMVUJXt54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyfE1W78halmwaGIYh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwkKs0UvLPpmroMAAJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy7pD6bNEESmJSjG254AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz23nYaSTu00wo7uh94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugx1b1andeuY4p4EIUZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzJ7S0JGumfKRDRDLd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
```
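The raw response is a JSON array of per-comment codes along the four dimensions shown in the Coding Result table. A hedged sketch of parsing and sanity-checking it follows; the allowed-value sets are inferred from the values that appear in this sample, not taken from an official codebook, so they are assumptions.

```python
import json

# Sketch: parse a raw LLM response like the one above and flag records
# whose values fall outside the sets observed in this sample.
# ASSUMPTION: these value sets are inferred from the displayed data only.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"fear", "mixed", "indifference", "approval"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse the JSON array and reject records with unexpected values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}"
                )
    return records

raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
codes = parse_codes(raw)
print(len(codes))  # 1
```

Validating against an explicit set like this catches the common failure mode of LLM coders drifting off-codebook (inventing a new label), rather than silently storing it.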