Raw LLM Responses
Inspect the exact model output behind any coded comment.
Look up by comment ID
Random samples — click to inspect
AI needs to be fully shut down. Humanity is nowhere near ready for this technolo…
ytc_UgykcPXSP…
i have chronic pain in my hands and feet but making art is still enjoyable becau…
ytc_Ugy3e_Vej…
Generative AI is not alive. It is merely software running on hardware. A softwar…
ytc_UgyPU4WwE…
There wont be any cars, or auto insuramce claims, or shopping carts, or grocery …
ytc_UgzbK3MWE…
Captain Sugoi Des: Is everyone's mind functioning in a negative way? It doesn't…
ytr_UgxlCog3j…
It's funny that even disabled people can make art themselves, while people who u…
ytc_UgwL5cnqG…
If I had this when I was younger I would have been the next mark zucker musk.…
ytc_UgwAL8ytF…
As an artist I agree with you. Most people don't understand how screwed my child…
ytc_UgylRkzi_…
Comment
While I originally thought watching this would be just extreme POVs, after it I feel like it was a great debate and I'm left with a sense that humanity will actually be able to 'tame the beast' of AI to become more like our companions rather than our destroyers. The superintelligence guy clearly has a point. We have no way to predict how such a technology would behave and it may well disregard us as we do ants, so therefore we should stop its development until we have better understanding and safeguards at a societal level. However, even if ASI does become real in the next decade, I believe it will also be real that ASI built specifically for regulation and governance by people equally as motivated to keep it in check as the profit-seeking tech bros will be able to buffer its own most lethal potential.
youtube
AI Governance
2026-03-22T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzxrtmujEu2J8CupXF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwJgJv785QtXzYiIhN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxXvfrHxVO0Z0De-394AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxMlcJuNDLp5KpRsxN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxOwXkbhYNDBUIRnQV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwGRwroF0bTqFXg3Wt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugys_ARzaXNOX-BiPz54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugx3IpTd5oxoiO_6sRJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzDnOzvdUpAOqGwHYl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgymI7SI3-OqGPlJkvB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
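Each entry in the raw response is one coded comment keyed by its comment ID, with one value per coding dimension (responsibility, reasoning, policy, emotion). A minimal sketch of looking up a coding by ID and tallying one dimension across the batch; the variable names and the two sample entries are illustrative, taken from the array above:

```python
import json
from collections import Counter

# Illustrative subset of the raw LLM response shown above.
raw_response = '''[
 {"id":"ytc_UgzxrtmujEu2J8CupXF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugys_ARzaXNOX-BiPz54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"approval"}
]'''

# Index codings by comment ID for direct lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up the dimensions coded for a single comment.
print(codings["ytc_Ugys_ARzaXNOX-BiPz54AaABAg"]["emotion"])  # approval

# Tally one dimension across the whole batch.
print(Counter(row["emotion"] for row in codings.values()))
```

The ID-keyed dictionary mirrors the tool's "look up by comment ID" view: a single parse of the response supports both per-comment inspection and aggregate counts.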