Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up by its comment ID.
Random samples

- "I feel like robots would actually agree with this. Imagine being a highly advanc…" (ytr_Ugz_BwURE…)
- "The reason why a less intelligent (like a baby) can control a super intelligent …" (ytc_UgzhiTNUA…)
- "Her MIT thesis on AI bias - Gender Shades - used data sets based only on parliam…" (ytc_UgxLzrqHG…)
- "I think AI is really powerful. But for practical applications a human needs to c…" (ytc_UgzKrBct7…)
- "I think palletier had AI write it. Wait that means it's not real ..? right ¿…" (ytc_Ugy-R3Dwa…)
- "You might eventually become part robot as you get very old. We've already got th…" (ytr_Ugzhdw0uY…)
- "Where does the assumption that "we created AI"? AI, like mirrors, originates fro…" (ytc_UgxKN0i8f…)
- "The guy is afraid of AI Singularity, when AI becomes self aware and will try to …" (ytr_Ugw3Z8-Q7…)
Comment
You can't have extreme wealth for everyone. Everyone can't "get richer" all at once. All money is, all wealth is, is evidence of someone having a right to goods and services at the expense of someone else. Often that money is earned, yes, but there is no such thing as having something to yourself when other people also have it simultaneously. So no ai wont everybody richer certainly, best case everybody is the same level of rich. Worse case, fewer and fewer people have all the money and that's what will happen and is happening.
youtube · AI Moral Status · 2025-12-27T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgyjxFC0g02mvGtMeSB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzj8juKd1jH3UhYQJF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzBngOKmCKoNnxRjYJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgzJ4znt45xmc81j5IB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyA_4tZbFz5wfSXDkF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwb_rnWzPALLTMR7SJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgztTPCczidkGFIxJ-d4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwdkQV7V2PFYc_eRad4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxbcoGwQTaKgfnu3jd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgzUhRuO-Ck61IyGc-h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
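
For reference, here is a minimal Python sketch of how a single coding result can be recovered from a raw response like the one above: it parses the JSON array and looks up one comment by its ID, printing the same four dimensions shown in the Coding Result table. The variable and function names (`raw_response`, `lookup_coding`) are illustrative assumptions, not part of the actual pipeline; only the field names and the example ID come from the response itself.

```python
import json

# Assumption: the raw LLM response is available as a JSON string.
# Only the first record from the response above is reproduced here.
raw_response = """
[
  {"id": "ytc_UgyjxFC0g02mvGtMeSB4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Return the coding record for one comment ID, or None if absent."""
    for row in json.loads(raw):
        if row.get("id") == comment_id:
            return row
    return None

coding = lookup_coding(raw_response, "ytc_UgyjxFC0g02mvGtMeSB4AaABAg")
if coding:
    # Print the four coding dimensions shown in the Coding Result table.
    for dim in ("responsibility", "reasoning", "policy", "emotion"):
        print(f"{dim:>15}: {coding[dim]}")
```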