Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I saw an argument that the prompter is the artist and the ai is their pen, penci…" — `ytc_UgxYN2tU-…`
- "I do prefer how things were before AI but I realize we can't fight the new power…" — `ytr_UgwEmPwW8…`
- "Folks: ChatGPT and other LLMs are NOT MINDS. ChatGPT is software that seeks patt…" — `ytc_UgxfwwRat…`
- "It boggles my mind when I hear someone saying that copilot and co increased thei…" — `ytc_UgxIoayJG…`
- "I'm not sure AI super intelligence would want to clone itself, they could end up…" — `ytc_UgwRhX2ys…`
- "Yes ai will understand overengineered enterprise microservices app and weird ass…" — `rdc_moxslcr`
- "Funny how consultants from firms like McKinsey & Company keep warning everyone a…" — `ytc_Ugxvh2hde…`
- "Bill gates purposed this years ago, for every robot that replaces, they get a ta…" — `ytr_UgzokjBYP…`
Comment
The Point is not whether AI can do, even do better, than current humans can. The Point is that the human race is CHOOSING TO DEVELOP MACHINES instead of CHOOSING TO DEVELOP HUMAN BEINGS. Humans can be trained to have as few car accidents as self-driving cars, etc. So WE the humans, must choose: will it be the machines, or us? Will we choose for the Human Race, or choose to destroy or at least ruin it by immoral unethical choices.
youtube · AI Governance · 2025-09-07T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwcFKvZgtIpERESJQ94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw0CDjStTmeP0gRbzF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzESjZ__oZyeRqlcpN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgybRxkmIWSFQmc-X_F4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxa29C-z3QnhQh2pqx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzN1VsTCU6grJc3_AB4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyIIvjzmiuF1u_UmzJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxwu_3gWepbJl1Pc1p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"resignation"},
{"id":"ytc_Ugy1tn-kEwjapUzl7gd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"mixed"},
{"id":"ytc_UgzAbmPA2gWZXXn2D5l4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"}
]
```
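The lookup-by-ID flow described above can be sketched in a few lines of Python. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the raw response shown here; the `index_codings` helper and the two-row sample payload are illustrative assumptions, not the tool's actual implementation.

```python
import json

# A trimmed sample of a raw batch response: a JSON array of per-comment codings,
# mirroring two rows from the response shown above (hypothetical subset).
RAW_RESPONSE = """[
  {"id": "ytc_UgzESjZ__oZyeRqlcpN4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyIIvjzmiuF1u_UmzJ4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""


def index_codings(raw: str) -> dict[str, dict]:
    """Parse a raw model response and index each coding by its comment ID."""
    return {row["id"]: row for row in json.loads(raw)}


codings = index_codings(RAW_RESPONSE)
coding = codings["ytc_UgzESjZ__oZyeRqlcpN4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # user fear
```

An unknown ID simply raises `KeyError`, which a UI layer could surface as "no coding found for this comment".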