Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I can't trust AI to not make up information sources when summarizing complex top…
rdc_m6yp89j
So-called "AI" is just an LLM faking being intelligent. What is this guy talking…
ytc_UgyMwXehT…
If you have no job and come from college with 100k in debt. How do you get this …
ytr_Ugy5hsdMH…
it seems to me many people hear it, and accept it is not good, and are not able …
ytc_Ugz-SyrYB…
I’ve been following her for years. This video has depicted her differently than …
ytc_UgyH46iAk…
Once autonomous drones can be produced at scale, computing power and manufacturi…
ytc_UgwurHhPH…
your art as a beginner is still better than anything I can draw, lol. I've been …
ytc_Ugzrmlntk…
What's the point of being an artist when a random image generator does it
And it…
ytr_UgyY_xclQ…
Comment
I think the decision to develop AI despite its risks is actually rational through the lens of game theory, whether you are talking about individual companies or countries. The matrix is like this: if you don’t develop AI but your competitor does, they may destroy humanity, but they may not, and now they have an advantage. If you also develop AI you still risk destroying humanity, but if you don’t, you cede the advantage. It is a Nash equilibrium (compare the prisoner’s dilemma). Cooperating and agreeing to stop developing AI may be better for humanity overall, but with no incentive to cooperate, both parties choose the option that leaves them both worse off.
youtube
AI Governance
2025-10-02T18:2…
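The game-theoretic claim in this comment can be checked directly. Below is a minimal sketch with illustrative ordinal payoffs (the numbers are assumptions, not from the source) showing that mutual development is the unique pure-strategy Nash equilibrium, mirroring the prisoner's dilemma the commenter invokes:

```python
from itertools import product

# Hypothetical ordinal payoffs for the game the comment describes.
# Strategies: "S" = stop developing AI, "D" = develop AI.
# Each cell is (payoff to row player, payoff to column player).
payoffs = {
    ("S", "S"): (3, 3),  # both stop: best for humanity overall
    ("S", "D"): (1, 4),  # you stop, rival develops: rival gains the edge
    ("D", "S"): (4, 1),  # you develop, rival stops: you gain the edge
    ("D", "D"): (2, 2),  # both develop: both worse off than (S, S)
}

def nash_equilibria(payoffs):
    """Return all pure-strategy Nash equilibria of a 2x2 game:
    cells where neither player gains by deviating unilaterally."""
    strategies = ("S", "D")
    equilibria = []
    for r, c in product(strategies, strategies):
        row_best = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0]
                       for alt in strategies)
        col_best = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1]
                       for alt in strategies)
        if row_best and col_best:
            equilibria.append((r, c))
    return equilibria

print(nash_equilibria(payoffs))  # [('D', 'D')]
```

Even though (S, S) Pareto-dominates (D, D), each player's dominant strategy is to develop, which is exactly the "both worse off" outcome the comment describes.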
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgxaWkXloG_20dh-U6N4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw0PlQ4ulaNSie6PTV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyaJD7oZKmDGsB1Kj54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzoRObDLAT6XuFUWiV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugzm__NW_a-VWg5mfQN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
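A response like the array above has to be validated before its codes are stored, since the model can omit a field or an ID. Here is a minimal parsing sketch (the function name and error handling are assumptions, not the tool's actual implementation), using one entry from the response above:

```python
import json

# One entry copied from the raw LLM response shown above.
raw_response = """[
  {"id": "ytc_Ugzm__NW_a-VWg5mfQN4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]"""

# The four coding dimensions from the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(text):
    """Parse a raw LLM coding response into {comment_id: codes}.

    Raises ValueError when an entry lacks an id or any dimension,
    so malformed model output is caught before it is stored.
    """
    codes = {}
    for entry in json.loads(text):
        comment_id = entry.get("id")
        if not comment_id:
            raise ValueError(f"entry missing id: {entry!r}")
        missing = [d for d in DIMENSIONS if d not in entry]
        if missing:
            raise ValueError(f"{comment_id}: missing {missing}")
        codes[comment_id] = {d: entry[d] for d in DIMENSIONS}
    return codes

coded = parse_codes(raw_response)
print(coded["ytc_Ugzm__NW_a-VWg5mfQN4AaABAg"]["emotion"])  # resignation
```

Validating every entry up front means a single malformed object fails the whole batch loudly rather than silently dropping one comment's codes.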