Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment
I think the largest imminent risk is about the increasing inequality of wealth in society that AI brings. We need to start thinking of how society with less need for human work / labour can function. Ultimately, I think that modern societies have to increasingly adopt "socialist" policies. For example: how about a taxation of companies based on the ratio of headcount to earnings. This way, those companies benefitting most from AI replacements and automation, will also have to pay more in order to contribute to society. Funds from these taxes could be used to finance base incomes that in turn fuel consumption that the economy needs. Of course, any of such efforts would need to be agreed and implemented on an international level.
I know this is pretty utopian, but from my point of view, this could be a helpful piece in the puzzle.
youtube · AI Governance · 2025-07-30T13:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz1-KLuEjgmBpnIcKp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwc0fo2VpkjTmwtbER4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzFLnBjv0_QcEwijWh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzgpJRxGLBL85UexzZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxMv3XslB_HzdzeZoZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzPxZrg-S2Nr4OEqPB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwhPgl1gw6Xhec3Zax4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugydz8nmc15MutQqEjh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxdD0SZyeqS9hnSs2J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz2w7Ea5jVkn4fGBbx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
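A response like the one above is a JSON array of coded records, one per comment, so looking up a comment by its ID reduces to parsing the array and indexing on the `id` field. The sketch below is a minimal, hypothetical illustration (the field names follow the records shown here; the abbreviated raw string is an assumption, not the full response):

```python
import json

# Hypothetical raw LLM response: a JSON array of coded records.
# Only one record from the batch above is shown for brevity.
raw_response = """
[
  {"id": "ytc_Ugz2w7Ea5jVkn4fGBbx4AaABAg",
   "responsibility": "distributed", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse a raw coding response and index its records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
# Look up the coding dimensions for a single comment.
print(codes["ytc_Ugz2w7Ea5jVkn4fGBbx4AaABAg"]["policy"])  # → regulate
```

In practice the model's output may carry extra text around the JSON (markdown fences, commentary), so a robust pipeline would extract the bracketed array before calling `json.loads`.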