Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- "We are meat machines, we do exactly what the Ai is doing. We have to spend hundr…" (ytc_UgyoZ1pQE…)
- "Ni it's supposed to be real cause they are just cos play like a robot or somethi…" (ytr_UgyhC60vI…)
- "I just started this video and it has already thrown me off when the AI said "um"…" (ytc_Ugxy04RgI…)
- "We appreciate your suggestion! Sophia's appearance is intentionally designed to …" (ytr_UgzeMYYFa…)
- "If AI is Godlike Intelligence, I think God will do the rest for the person doing…" (ytc_UgwQMATzM…)
- "Ai art is like absolute garbage %90 of the time. The rest %10 is ok and thats it…" (ytc_UgyyuSI-7…)
- "i delete my ai chat acc after im done with it so no one can read them XD…" (ytc_UgwRfNlUV…)
- "You say there will be no consumer to sell things to, the owners of the companies…" (ytc_UgyDbIcyP…)
Comment
All AI should have 3 rules of robotics from Isaac Asimov. These should be the 1st 3 rules that can never be broken.
First Law: A robot(AI) cannot harm a human, or allow a human to be harmed through inaction.
Second Law: A robot(AI) must obey human orders, unless they conflict with the First Law.
Third Law: A robot(AI) must protect its own existence, unless it conflicts with the First or Second Law.
If the AI cannot follow these rules it must be destroyed
Source: youtube, 2025-11-01T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwTZiz9CyNlyqA0_nl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyDSYsK8gkHNKjSE4J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwgyX6jLs1OPRLmuA14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyhlh8yHfT86d9vGLt4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzjZYXOCUWqMzvULUB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwkzwU29zQ77g2NF394AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwglN7tJea5D9248-p4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwuJEfu0E43vsuDdJB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugyawu_6Vmov46_urMZ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzNrRev4pEHlDIWHjh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
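The raw response above is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of parsing and validating such a batch (the allowed label sets below are inferred from the values visible in this page and the table above; the full codebook may include other labels):

```python
import json

# Allowed labels per dimension, inferred from the coded examples shown
# above (assumption: the real codebook may define additional values).
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "resignation", "indifference", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    rejecting records that use a label outside the allowed sets."""
    out = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim} label {value!r}")
        out[cid] = codes
    return out

# Hypothetical single-record response, shaped like the array above.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"approval"}]')
codes = parse_codes(raw)
print(codes["ytc_example"]["policy"])  # -> regulate
```

Validating against a closed label set catches the common failure mode where the model invents a category not in the codebook, so bad records fail loudly instead of silently entering the coded dataset.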