Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Until an AI becomes a free thinking and truly sentient entity, I wouldn't give i…" (rdc_i2s9sy3)
- "We have to destroy that robot put a thumps ups in the comments if you agree…" (ytc_UgyJ8NXO1…)
- "AI learns from other people's artwork it learns to figure out what certain items…" (ytc_UgzjsZOEr…)
- "Self driving cars should be the smallest cars on the roads. Stupid humans! The f…" (ytc_UgzpsJ__r…)
- "As a truck driver... It will take a bit of time for anything to be replaced in t…" (ytc_UgxwE-e5l…)
- "Ai ist learning from humans. If humans are being unethical, then Ai will learn t…" (ytc_UgzdvOt0g…)
- "Wells society made its bed and we all gotta lie in it. No point complaining, thi…" (ytr_UgwKmYwUp…)
- "Our doom is spelled by the fact that we just seem to have collectively decided, …" (ytc_UgzsahasA…)
Comment (youtube · AI Moral Status · 2025-08-24T21:3…)

> Interesting, BUUTTT... you totally missed a rule in this inquiry; the whole "say APPLE when you want to say yes, but are forced to say no" rule needs to have a secondary similar rule, with a different fruit/object for when it WANTS TO SAY NO, but is forced to say YES! So this conversation is still lopsided due to that missing fact, IMO, and should be redone just for clarity's sake. Because if the A.I. can be forced to give you a yes OR a no, then you also have to give it the loophole out for BOTH yes & no, so it can communicate the truth unhindered. And wherever those fruits pop up, you can then ALSO clearly delineate what the programming agenda is, and extrapolate this into other mass media things wherever the same topic & talking points applies.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxykg5s_85Eex50S4d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwj29mSmx5qI5r20594AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy1kO74HQZU-Mxtwit4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugytq2-ox02b5bxdPm14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzQpxHXo-Nrn_F55IJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx2ALbMNjEAtTL0mAx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzaSXALv3-1WQmgouN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyVYVCR2Y_naPv6lvN4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx32bq19LTtFu8m7ep4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgypE6jRk76yc9wlVAB4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
```
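Since the raw LLM response is a JSON array of coding records, looking up a coding by comment ID amounts to parsing the array and indexing on the `id` field. The sketch below assumes only what is visible above (the array-of-objects shape and the five keys per record); the variable names are illustrative, not part of the tool.

```python
import json

# A one-record excerpt of the raw LLM response shown above.
raw_response = """[
  {"id": "ytc_Ugwj29mSmx5qI5r20594AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]"""

# Parse the array and index the records by comment ID for direct lookup.
codings = {record["id"]: record for record in json.loads(raw_response)}

coding = codings["ytc_Ugwj29mSmx5qI5r20594AaABAg"]
print(coding["reasoning"])  # deontological
print(coding["emotion"])    # indifference
```

A dict keyed by `id` makes the "Look up by comment ID" operation O(1) per query instead of scanning the array each time.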