Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- These tools are not that bad per se (Opus better than the other IMHO), BUT you m… (ytc_UgzAdw1Sm…)
- IDK what's the point in opposing UBI when there is no work. Imagine a situation … (ytc_UgzC2qfTh…)
- While I don’t think I am the typical disabled individual AI supporters are think… (ytc_UgwO150cH…)
- I don't know. If an artist is inspired by a copyrighted work that they didn't p… (ytc_Ugy2zmiQ8…)
- Ai will take over cause it can reproduce but they say we need people to troubles… (ytc_UgziwyL6E…)
- If the schooling system can be manipulated so can AI honestly kids these days ar… (ytr_UgwdgrF_B…)
- Autonomous weapons aren't sociopaths, they simply behave according to what a pro… (ytc_UggqZ-Bfm…)
- Really balanced takes here. The main thing I'm concerned about with AI is the us… (ytc_Ugz0IW1_P…)
Comment
This is really a question about the Bourgeoisie versus the Proletariat. Artificial Intelligence doesn't need to equate to the eradication of social stability (i.e. AI is not inherently problematic), but the increased centralization of wealth and the manipulation of consumer populations will invariably cause turbulence. I do like the teachings of Ilya Sutskever and Geoff Hinton, but those individuals would be more effective in their messages if they talked about the misuse of AI in the hands of Elitist Populations, and how that misuse is the real threat. It's important to stay focused on the economic and class aspects of this issue -- and don't get too enchanted by the shadow of the technology.
youtube · AI Governance · 2025-09-08T02:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwjHZq3b7bmCW5tQqZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyjEEAJi1qUFcmMAud4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxjgWiJi-U7mw_-RSR4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxAX5T2n6cxc657KK94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzfECYYj7nHg8Ggly54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxvsURs11qmnZPAmTZ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyLT7ejwpWeUA76Kll4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzuVUkpf5Q5Hh6RzRZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw2cP45iAPHumXoa8t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyUJzvM6fpi6e4vgYJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
```
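A raw response like the one above is only usable if it parses as JSON and every record stays within the coding scheme. The sketch below shows one way to sanity-check it; the allowed value sets are inferred from the sample output and the coding-result table, not from the actual codebook, and `validate_codes` is a hypothetical helper name.

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# The real codebook may define additional categories.
SCHEMA = {
    "responsibility": {"ai_itself", "user", "none", "company", "developer", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "mixed"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"fear", "resignation", "approval", "outrage", "mixed"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)  # raises ValueError if the model emitted non-JSON
    valid = []
    for rec in records:
        # Comment IDs in the samples start with "ytc_" (or "ytr_" for replies).
        if not isinstance(rec, dict) or not str(rec.get("id", "")).startswith(("ytc_", "ytr_")):
            continue
        # Every dimension must be present with an allowed value.
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

raw = '[{"id":"ytc_abc","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"}]'
print(len(validate_codes(raw)))  # → 1
```

Records with an unknown label or a malformed ID are dropped rather than repaired, so a downstream tally never silently counts an out-of-scheme code.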