Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I hate AI, but honestly why would this not be fair use? Obviously copying verbat…" (ytc_UgwQplaR4…)
- "@Cosmopavone it can not do it because of the fundamental method of chatgpt does …" (ytr_UgxVMiV8O…)
- "The West needs to cut India the fuck off and make them deal with their own probl…" (rdc_luchlwq)
- "I don't usually comment on things, but I felt the need to say that you are an ar…" (ytc_UgzM16Sox…)
- "One of my professors said his use for ai that he allows is "if you would ask you…" (ytc_UgycBWxqF…)
- "AI companies should stay the fuck away from the arts. Music, movies, photography…" (ytc_UgyBZN_nf…)
- "@LucaBl The differentiation I would make, is that ChatGPT specifically has prett…" (ytr_UgxzIPHbI…)
- "Don't expect the system to mend to someone who isn't controlling it. As we train…" (ytc_UgzlUPKJs…)
Comment
A major problem with AI is that it only knows what you feed it, and it doesn't know what should be forgotten since human information and knowledge is always changing and evolving. AI models are supposed to be tested and validated, but it is impossible for humans to do so, and if you fully automate the feeding and validation, it can result in things like the AI generating porn leading to lawsuits. AI used to generate code is already creating erroneous code that requires human debugging that AI was supposed to prevent.
youtube · AI Governance · 2026-03-17T00:5… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz6oVjkT_TFWwVtBtF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgwlAL5sJg7gQmdJD094AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxpMzx8eHluo6OHKkR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwBAqdmgU5U2tKnpv54AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwmnfS91L4wpxLIqAp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzE82QZi5h3qs9PiBF4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxWK165n9xHPBmPae94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzVO-MhDhTcTfE6Hwh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwmQaHh20EiijWzt4l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzGhEObmGp-nTli_E94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
```
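Because the raw response is just a JSON array of per-comment code objects, it can be parsed and validated mechanically before the codes are stored. A minimal Python sketch of that step; the allowed value sets and the `parse_codes` helper are assumptions inferred from the examples visible above, not part of the actual tool:

```python
import json

# Allowed values per coding dimension, inferred from the examples shown above
# (the real codebook may include values not visible in this sample).
ALLOWED = {
    "responsibility": {"developer", "user", "distributed", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "contractualist", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "mixed", "unclear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, rejecting off-codebook values."""
    coded = {}
    for row in json.loads(raw):
        codes = {dim: row[dim] for dim in ALLOWED}
        bad = {dim: v for dim, v in codes.items() if v not in ALLOWED[dim]}
        if bad:
            raise ValueError(f"{row['id']}: unexpected values {bad}")
        coded[row["id"]] = codes
    return coded

# Usage: look up one coded comment by its ID, as the inspector above does.
raw = ('[{"id":"ytc_UgxWK165n9xHPBmPae94AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
codes = parse_codes(raw)
print(codes["ytc_UgxWK165n9xHPBmPae94AaABAg"]["policy"])  # liability
```

Validating at parse time is what makes a value like `developer` or `liability` safe to render directly in the Coding Result table: anything outside the codebook fails loudly instead of silently becoming a new category.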