Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect
- "I love your art, your use of lines are really amazing and I love your choice of …" (ytc_UgxBgHkv_…)
- "Interesting. If you go to the Bing chatbot and ask about the conversation where…" (ytc_UgxaDIhRk…)
- "Being anti semitic is normal state of a truth seeking individual, only natural a…" (ytc_UgzPgS2Ta…)
- "It's like they should make a logical law that requires AI videos or pictures to …" (ytc_UgzS9aIas…)
- "Yay let’s see what happens to capitalism when people are no longer needed. Or sh…" (rdc_j6fgxr4)
- "😂 okay so maybe this is just a me thing it's not far less predictable at all for…" (ytc_UgxoxWZKo…)
- "Why can't I pay an artist to use the art tools, because I'm still shit at art de…" (ytc_UgxliA3oI…)
- "One big problem, that I see is that we always ask the AI to do things for us! Ne…" (ytc_Ugz0Q0ofa…)
Comment
@DoktorIcksTV I think one of the laws was something about "may not injure a human being or, through inaction, allow a human being to come to harm". It is easy to see that this methodology could result in a logical issue where the robot would start to see everything that a human might do could be potentially harmful to human's so it started to prevent humans from basically doing anything. This is why humans must remain their own governing system, and maintain the right to choose what level of risk they want to impose on themselves in life. It also explains why the level of "acceptable" risk has evolved as we grow more mature as a species.
Source: youtube · AI Governance · 2023-03-30T08:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytr_UgwvtUjccGFfPIV6nwZ4AaABAg.9nrnZpaNGkR9nsY-cBEF5A","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgwvtUjccGFfPIV6nwZ4AaABAg.9nrnZpaNGkR9nsbW4K7MBw","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgzcE7kEUWSfQQYndD94AaABAg.9nrn4m0trjv9nruHw0PaSq","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgzcE7kEUWSfQQYndD94AaABAg.9nrlIfZL3aU9nrle7iFSFV","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgzcE7kEUWSfQQYndD94AaABAg.9nrlIfZL3aU9ns6RZtoKDO","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgzcE7kEUWSfQQYndD94AaABAg.9nrlIfZL3aU9nt0NShWS_s","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzcE7kEUWSfQQYndD94AaABAg.9nrlIfZL3aU9ntJ7Qv2sCu","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgzcE7kEUWSfQQYndD94AaABAg.9nrlIfZL3aU9ntlXtisU_-","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytr_Ugw6EtfFGqbU3EKNXFx4AaABAg.8ebBLFhnP-u9TQaU28JdPc","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgxyULC5OslX0G74cJx4AaABAg.8eZkIXf7xt38e_xmX9IADA","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
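The raw response above is a JSON array of per-comment codes, one object per comment ID with four coding dimensions. A minimal sketch of how such a response could be parsed and validated before the codes are stored (the allowed value sets below are inferred from the values visible in this sample, not from an exhaustive codebook, and `parse_codes` is a hypothetical helper, not part of the pipeline shown here):

```python
import json

# Allowed values per dimension, inferred from the sample response above;
# the real codebook may contain additional categories.
DIMENSIONS = {
    "responsibility": {"ai_itself", "company", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "none", "liability"},
    "emotion": {"fear", "indifference", "resignation", "approval"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose codes are valid."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Every row must carry a comment ID and a known value per dimension.
        if "id" not in row:
            continue
        if all(row.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            valid.append(row)
    return valid

raw = (
    '[{"id":"ytr_example","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]'
)
print(len(parse_codes(raw)))  # 1
```

Rows with a misspelled or out-of-vocabulary code are silently dropped here; a production coder would more likely log them for re-prompting rather than discard them.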