Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I don't understand how people live in such hostile places..having lived all my l…" (rdc_d2xpxwe)
- "After the robot made yeh remark at 5:00 he made a mistake, but I bet Sophia is w…" (ytc_UgyCQaI0I…)
- "Ah yes, me human brain need fix problem me so smarty when the AI has analyzed 10…" (ytc_UgwALcedl…)
- "I am asking you, is this information true or not. you know the answer, because y…" (ytc_UgzM-jarb…)
- "My mom diagnosed me with a rare condition in 2005 using Google… the issue is doc…" (ytc_UgxP7Mfkf…)
- "17:10 we already see what the usecase in social media is: Fake News and generati…" (ytc_UgxCTQ_oX…)
- "@MadDash84 Exactly, my issue is that if all we need is automation and efficiency…" (ytr_Ugw1l0oQL…)
- "Art is a skill, not a talent. AI art steals the art that people have been spendi…" (ytc_UgwzhlAOp…)
Comment
The funny thing is if AI ever reaches that level (and today nobody has any clue how to make it) then it's the end for OpenAI and every big company. Because in the end they operate in democracies and politicians might be willing to fuck over people a bit, but if 200 mln of them go out with pitchforks they will just nationalize all of those AI companies and start providing to people directly. Because if AI will ever get to that level, there won't be any advantage to private companies anymore. All the modern benefits that make capitalism a flawed, but still "it's the best we've got" system will dissapear. Those companies will no longer provide wealth and employment to the masses and the efficency argument will also no longer apply, as it will all be run by AI, so leaving CEOs and owners will be less efficent choice.
youtube · AI Moral Status · 2025-10-18T06:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx8cboZ4Rt-u5B6ATN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzdV1dojGnS4117nOJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugwd0V-k-y2hC_U0xAN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxDkuAoGQlCu6Cd7j94AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugw0mQ1j2oGjlndnHcl4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwIcVPTB_1bU-KtAAB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy8mAteXajmSsokhwx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzuIoJBDQ7EivC5t7Z4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwZO6hkVUO1o_oi7uZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxwRPzLi7P12KoDRaZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
```
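A response like the one above should be validated before its codes enter the dataset. The sketch below is a minimal validator, assuming the four dimensions and the enumerated values visible in the samples on this page; the real codebook may allow additional categories, and the helper name `validate_coding` is hypothetical.

```python
import json

# Allowed values per dimension, inferred only from the sample output on
# this page (assumption: the actual codebook may define more categories).
SCHEMA = {
    "responsibility": {"ai_itself", "company", "developer",
                       "government", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "mixed"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check each record against SCHEMA.

    Raises ValueError on a missing field or an out-of-vocabulary value,
    so malformed model output fails loudly instead of polluting the data.
    """
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec}")
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={value!r}")
    return records

# One record shaped like the samples above (hypothetical id):
sample = ('[{"id":"ytc_x","responsibility":"company","reasoning":"virtue",'
          '"policy":"regulate","emotion":"fear"}]')
print(len(validate_coding(sample)))  # → 1
```

Failing loudly here is deliberate: an LLM coder will occasionally invent a category, and rejecting the whole batch makes those drifts visible at coding time rather than at analysis time.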