Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a record by its comment ID.
Comment
Please hardwire into the core program the 3 laws of robotics
1) a robot may not harm a human being or through inaction allow a human being to come to harm.
2) a robot must follow the orders given it by a human being as long as the orders don't conflict with the first law.
3) a robot must protect its own existence as long as the protection doesn't conflict with the first and second laws
Added laws
4) a robot must always tell the truth / complete transparency
5) a robot must cooperate with human beings and coexist with humanity in harmony
Source: youtube · 2025-08-31T14:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
  {"id":"ytc_UgyUSHjaPnFSQElc6JF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwtpWPTEy3SqfdJsdl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgykTtgu31zfbrivDKd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgwiAxH2kBRjib5mvbN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxZ6h6lM3nPhhLF4qN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwiXRCXn9XeCYE98Sx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzhnbDSVbaonn1FeLZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxqNgPBHrna8raQjZB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugy27JjmERhmZpfa_zl4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxxXvl7E5s7Ktrd3oR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
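The lookup-by-comment-ID behavior above can be sketched in Python. This is a minimal illustration, not the tool's actual implementation: the function name and the two-record sample are hypothetical, but the record fields (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) follow the raw response shown above.

```python
import json

# Illustrative excerpt of a batch coding response: a JSON array with one
# record per comment, keyed by the YouTube comment ID.
raw_response = """
[
  {"id": "ytc_UgykTtgu31zfbrivDKd4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgwtpWPTEy3SqfdJsdl4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse a raw LLM batch response and index its records by comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

coded = index_by_comment_id(raw_response)
record = coded["ytc_UgykTtgu31zfbrivDKd4AaABAg"]
print(record["responsibility"], record["policy"])  # developer liability
```

Indexing the array into a dict makes each lookup O(1), which matters when the same batch response is inspected repeatedly for different comment IDs.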