Raw LLM Responses
Inspect the exact model output for any coded comment, or look a comment up directly by its ID.
Random samples — click to inspect

- "When large scale companies like IBM and Microsoft ask for regulations, you can b…" (ytc_UgzXLMSuk…)
- "some true about it; right now, ai is controlling social media algorithms, alteri…" (ytc_UgwgFZsTl…)
- "ai chatbots are devastatingly bad for therapy. a good therapist will make you un…" (ytc_Ugx7xq_WA…)
- "Most of the other humans are trying to enslave the others in one way or another,…" (ytc_UgxF7Iw1W…)
- "Missing the point about how Ai replaces jobs at all levels - If the white collar…" (ytc_UgxtdDIwm…)
- "In the US here. Sometimes I wish there was a solar roof over the parking lots at…" (rdc_eueqoif)
- "Two things are certain in life, technology will advance ... and someones gonna f…" (ytc_Ugwd1Wcxa…)
- "How is a robot going to know if a man is wounded and down. Can you imagine the u…" (ytc_UgwU3xCMP…)
Comment
I worry that the P-Doom discussion gives generative AI technology a mystique that plays into the hands of the accelerationist tech bros steering the AI ship right now. Investors hear the talk and probably think, "if this tech is going to be powerful enough to destroy humanity, then it must be really profitable in the meantime, it must be something I will want to control, and what better way than funding it?"
The tragedy of the commons is foundational to how humans think, anytime you can privatize the gains and socialize the cost, we are almost hard-wired to take that deal. The thinking is that technology is inevitable, if humanity is screwed anyway then I might as well get my piece of the pie while I can. Everybody pays the price of a high P-Doom tech being developed, but the investors get to keep all the money while it puts everyone else out of work
Source: youtube · Topic: AI Governance · Posted: 2025-05-21T11:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz_aYY_K34HF9TW2fZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxAk8JYYtGq_xVWh0B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw_rxWQoiL_iPWdnXF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwdWLFgaVn9xYu2-i94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyjGvTFJRj9zPz1SKh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx_hkl9ApvYj6TLWdt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyP-SYtFkvUH9LeQtF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgyWQwYIjDXbWRshBC14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwyvAs7j0kesxY26Q54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwoGSxcV6Sb5gGI15V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
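A batch response like the one above can be parsed and indexed by comment ID for the lookup described at the top of the page. The sketch below is a minimal illustration, not the app's actual code: the allowed value sets are inferred only from the values visible on this page (the full codebook may include more categories), and the `ytc_abc` ID in the usage line is hypothetical.

```python
import json

# Allowed values per coding dimension. These sets are inferred from the
# values visible in this page's results; the real codebook may be larger.
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"fear", "approval", "outrage", "indifference", "resignation"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw batch response and index the codings by comment ID,
    rejecting any value outside the expected label sets."""
    coded = {}
    for row in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim} value {row.get(dim)!r}")
        coded[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Hypothetical usage with a made-up comment ID:
raw = ('[{"id":"ytc_abc","responsibility":"company","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"fear"}]')
coded = parse_response(raw)
print(coded["ytc_abc"]["policy"])  # regulate
```

Validating against fixed label sets catches the common failure mode where the model invents an off-schema label, which would otherwise silently corrupt downstream tallies.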