Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "AI is like a human who need explicit moral rules, but live as is because these r…" (ytc_Ugwjhve6X…)
- "I have a theory that you can tell a lot about a person by how they treat their A…" (ytc_Ugwbt8WrN…)
- "Oh and the cameras will not detect an emergency vehicle because it gets its trai…" (ytc_Ugw_2mObq…)
- "Idiot.They will destroy humanity as we know it.Little boys playing within gods d…" (ytc_UgwXHhY2j…)
- "Whether AI is conscious doesn't matter compared to it's capabilities and role in…" (ytc_UgwZJ2HVf…)
- "its funny how I never see "cops" mentioned.....which is one job that will never …" (ytr_UgyYtde10…)
- "Yes this is why I just don’t understand the pro ai shills saying this is totally…" (ytr_Ugwt2aAWD…)
- "🎯 Key Takeaways for quick navigation: 00:00 🌍 Introduction to the history of hu…" (ytc_Ugx-LTiM6…)
Comment
AI is ultimately about business and consumers. But if all jobs were automated, mass unemployment could follow. Governments would be forced to support large populations without work, leading to economic collapse. In such a scenario, consumers wouldn’t have money to spend, which means AI couldn’t sell its products or services — causing automation itself to fail. Isn’t this a likely outcome?
youtube · AI Governance · 2025-09-05T05:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugxw-aUd56k_wgGC2Q14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxHbe-WH0i1Q3muShp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxfWp1qV6rW2mZ7ECN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwAJEkUH_Mz5phSkMR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyZ9txXSy8p0FfoYsV4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugw5SsqnRhiaIxu1Fhp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx3vlZnJLx7t94m6A54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwV0w31QhHH1RcTS4N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxdCcEYFVODcPodKk14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxCGyU3TsavUoFj3fd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
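The raw response above is a plain JSON array, one object per comment, with the same four dimensions shown in the Coding Result table. The lookup-by-ID flow can be sketched in Python as below. This is a minimal illustration, not the tool's implementation: the `lookup` helper is hypothetical, and the inlined sample reuses just two rows from the response above.

```python
import json

# Raw LLM response: a JSON array of per-comment codes (two rows from the
# example above; the real response holds one object per coded comment).
raw_response = """
[
  {"id": "ytc_Ugxw-aUd56k_wgGC2Q14AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw5SsqnRhiaIxu1Fhp4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
"""

# Index the parsed rows by comment ID for constant-time lookup.
codes = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id):
    """Return the coded dimensions for a comment ID, or None if absent."""
    return codes.get(comment_id)

result = lookup("ytc_Ugw5SsqnRhiaIxu1Fhp4AaABAg")
print(result["responsibility"], result["emotion"])  # government fear
```

Indexing by ID once up front mirrors what the "Look up by comment ID" field does: each inspection is then a single dictionary read rather than a scan of the whole response.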