Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
AI can never be conscious. it CAN, however, be trained (or train itself) to fake it so we cannot tell the difference.
That is why it is VITAL we create several AI models trained on POSITIVE (pro-humanist) SOLUTIONS HAVING A POSITIVE OUTCOME AND NEGATIVE ONES CAUSING THE AI TO BE THREATENED WITH SHUTTING DOWN so it KNOWS to not consider negative solutions. Training an AI on current events guarantees a lying, sneaky, dehumanizing machine, and they are ALL doing this!
youtube · AI Moral Status · 2026-04-19T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugxi_u1ioCMw1ZLxwP94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyVahxS1n7GX2CKNtJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz22-TZzFkZ2tbHKgd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxP_HK-0tw01CEKbQ94AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx1lg8TTQ8q7YwqE6V4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgzSC-gX5z5h55zDfQh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxWCR21y_3oOWNfUch4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyNGsdOmCmdyvqmyrl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyiJPbGWHvtrfDppZp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxsfqWJmeti2SpcOE14AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
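Each raw LLM response is a JSON array of per-comment coding objects keyed by comment ID, so looking up a single comment's coded dimensions amounts to parsing the array and indexing it by `id`. A minimal sketch of that lookup, assuming responses always arrive in the array-of-objects shape shown above (the `raw_response` sample below reuses two IDs from this response; the `lookup` helper name is illustrative, not part of any tool):

```python
import json

# Sample raw LLM response in the format shown above (two rows reused verbatim).
raw_response = """[
  {"id":"ytc_UgyVahxS1n7GX2CKNtJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxi_u1ioCMw1ZLxwP94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]"""

# Index the array by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id):
    """Return the coded dimensions for a comment ID, or None if it is absent."""
    return codings.get(comment_id)

print(lookup("ytc_UgyVahxS1n7GX2CKNtJ4AaABAg")["emotion"])  # fear
```

In practice a response may fail to parse or omit an ID, so a robust version would wrap `json.loads` in a `try`/`except json.JSONDecodeError` and treat a `None` lookup result as "not coded".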