Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a response by comment ID, or browse the random samples below.
Random samples
- "People can’t even accurately program software how they going to program a robot …" (ytc_Ugw7qS2lB…)
- "I DO NOT CONSENT TO AI OR THE ORACLE USING MY DATA. MY PRIVACY IS MY PRIVACY.…" (ytc_UgzJiVPTD…)
- "@itcouldbelupus2842 is it lazy to not want to do 1000 hours unenjoyable work for…" (ytr_UgwMvECGA…)
- "Do you mean startups who are selling AI or adopting it for a particular business…" (rdc_n9h5v15)
- "For me, I've always seen, even before it was even a proof of concept, that AI is…" (ytc_UgxDtLWan…)
- "Thank you so much, sir. This is a kind of message that someone like me needs. No…" (ytc_Ugxo299dp…)
- "this feels less like a social media fight and more like a governance collision. …" (rdc_nydi8ah)
- "I understand your perspective! While it's true that AI like Sophia can mimic hum…" (ytr_Ugy3f_hLA…)
Comment
There is a huge flaw in the logic of this video though. It's comparing the possible coming AI spike with trends that took decades to be where they are today (existing automation and outsourcing), and looking at trends in investments making people more money over the past decade, and assuming that it will be a similar situation with AI automating everything.
30%-40% of the existing job market could potentially disappear within the next 18 months, literally nothing we have in place would mitigate that impact, and in the current geopolitical climate it's very unlikely that we will get there either here in the US or elsewhere. If money itself becomes worthless, which is likely in that scenario, then no amount of investing will help.
Now, this might not happen. We could hit a roadblock in AI development from resources (chip shortages or energy production limitations, for example) or coding itself (LLMs winding up being an actual dead end), but barring that we are in for an unprecedented paradigm shift that none of these predictions will be able to map.
Source: youtube · AI Harm Incident · 2024-07-29T00:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxm7ENojjvkF12DvLB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyZMhLpDCn4D5jSUZt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxHPTbCD7wyErFM7JN4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwUKSeAiYp0lRJ5jqh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxKlPjNG7aKu82glWN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzDtVKFLRUw7SZ4DbR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx2rbKNDHKC-hUyz_t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy7_X9JdCJW5fuRos94AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgytNdnkuR-ZnmMgq9Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxU9dPXuKcGOp-LgB14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
```
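The raw response above is a JSON array of per-comment codes across four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such output could be validated before loading into a coding table follows; the allowed value sets are inferred from the samples shown on this page, not taken from the project's actual codebook, so treat them as assumptions:

```python
import json

# Allowed values per dimension -- inferred from the sample output above;
# the project's real codebook may include values not seen here (assumption).
ALLOWED = {
    "responsibility": {"company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "industry_self", "unclear"},
    "emotion": {"outrage", "fear", "approval", "mixed", "unclear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record must carry an "id" plus a recognized value for every
    coded dimension; anything else is dropped rather than loaded.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

if __name__ == "__main__":
    raw = ('[{"id":"ytc_example","responsibility":"company",'
           '"reasoning":"consequentialist","policy":"none",'
           '"emotion":"outrage"}]')
    print(len(validate_codes(raw)))  # prints 1
```

Dropping malformed records (instead of raising) keeps a long batch run going when the model emits an off-codebook value; a stricter pipeline might log and re-prompt instead.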