Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "AI isn’t the threat on its own. The danger lies in the first person who fuses wi…" (ytc_UgxA_AOLk…)
- "A New Tax Model for the Age of AI: HEW "Human Equivalent Work": As AI agents in…" (ytc_UgzQZNL4w…)
- "Well ai basically steals people’s art into their ai “art” with real art you can …" (ytr_Ugw03fi_X…)
- "I guess the one good thing about ai as a musician is it's driven me to be more a…" (ytc_UgwXi0vpo…)
- "What I mostly hate is the AI slop/brain rot that the kids are getting and it’s a…" (ytc_UgzJTpeC3…)
- "This all honestly sounds like a skill issue. Im not trolling, its the literal su…" (ytc_Ugy6_fG1H…)
- "I think he read my mind cause I was thinking today "how is ai gonna take all of …" (ytc_UgysU-BWC…)
- "I feel like this particular situation is more of a "young artist gets manipulate…" (ytc_UgyYW2S9a…)
Comment
except that ChatGPT is really not "code" that it can modify, it is massive amounts of training data, from which parameters have been abstracted; in principle an AI and/or ChatGPT can "generate" more training data? but based on what?!? if it generates "crazy" training data, it can devolve into somethig "crazy" (less effective? more psychotic?)
youtube · AI Governance · 2025-07-17T23:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwAVQmk6XFhsstMcct4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwzhrIQdvG4aQPXLo14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwdI-s4W-nDy5EBgq54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw3GEW-k5qxZG89DqR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyaZYZhviEPTkGkOZp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx8GBpKQZBExC2ZzMh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzg_ti36sSWY0aoSHB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyTP7-yVV9Cw9SZ7W94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzBE55gJ6DZ0T5cMP94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxr5kx6rdYfOC2UrnZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}
]
```
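A raw response like the one above can be checked before the codes are stored. Below is a minimal validation sketch in Python; the allowed values per dimension are inferred only from the codes visible in this sample, so the real codebook may define additional categories, and the helper name `validate` is illustrative, not part of the tool.

```python
import json

# Allowed values per dimension, inferred from the sample response above;
# the actual codebook may include additional categories.
SCHEMA = {
    "responsibility": {"developer", "government", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with unknown codes."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows

raw = '[{"id":"ytc_x","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]'
print(len(validate(raw)))  # 1
```

Rejecting the whole batch on one bad row is a deliberate choice here: a malformed code usually means the model drifted from the prompt, so re-querying beats silently dropping rows.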