Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_UgxYPEHmt…` · "For those who haven't seen the original video, this is robot vs human.. But thos…"
- `ytr_UgxsNKEMK…` · "That's a thought-provoking question! The aim of creating robots like Sophia goes…"
- `ytc_UgyVJHQDN…` · "Yea sure, IT will be automated by IT.. Idk where they base this off, but as a so…"
- `ytc_Ugyr2rmMI…` · "meanwhile, AI uses US-HUMANS! This is insane. I didn't know anything about AI …"
- `ytr_Ugz_dgum5…` · "AI is a primitive version of a human being fed enormous data sets of images (eye…"
- `rdc_ibdmz0a` · "With HVDC, you are looking at perhaps 3.5% losses per 1000km. And for reference…"
- `ytc_UgyiRa7ti…` · "Let's just say it like it is: Companies just don't want to hire and realize they…"
- `ytc_UgxLo8AGS…` · "I think A.I will be like humans in the way they are all going to have different …"
Comment
Here is something else to consider. What will the US government do if Altman and others have overpromised on the capabilities of LLMs and they can't actually replace human work at a large scale? Stock markets are held up in large part by the tech sector, and if investors do not have a return on their investment in the next 4-5 years, instead of seeing tens of millions of people without work due to the use of LLMs, we might see a stock market crash and an economic downturn. I hope we won't consider these companies too big to fail if that's what happens. If these companies succeed, they will not share anything of their success, but they sure would like to saved if their bold promises do not bear out.
youtube · AI Jobs · 2025-10-08T20:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwoU39CJirinMbS5ih4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyFnt182CTfTwwXa4x4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw3EVlY5imA-VWIJv94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx8gAJhuaQvtPDIPTp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzKs8DjZ-j88Wd5ZYB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwbQJlFywPOfZrOhfh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzSUVRjVglGl8sA-MJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz71zXLHch_0a2ZrYF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
  {"id":"ytc_UgzYF0S4IfFFMy3GZJZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxflg6yk1mShI0Fstt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
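The raw response is a JSON array with one object per coded comment. A minimal sketch of parsing such a response and looking up a comment's codes by ID (the field names come from the output above; the label sets are only the values observed here and may not be the full codebook, and `parse_codes` is a hypothetical helper, not part of the tool):

```python
import json

# Coding dimensions with the label sets observed in this batch
# (assumption: treat anything outside these sets as an invalid code).
DIMENSIONS = {
    "responsibility": {"company", "government", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of code objects) into an
    id -> codes mapping, dropping entries with out-of-codebook labels."""
    out = {}
    for item in json.loads(raw):
        cid = item.get("id")
        if not cid:
            continue
        codes = {d: item.get(d) for d in DIMENSIONS}
        if all(codes[d] in DIMENSIONS[d] for d in DIMENSIONS):
            out[cid] = codes
    return out

raw = ('[{"id":"ytc_Ugz71zXLHch_0a2ZrYF4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"industry_self","emotion":"fear"}]')
codes = parse_codes(raw)
print(codes["ytc_Ugz71zXLHch_0a2ZrYF4AaABAg"]["policy"])  # industry_self
```

Validating against the dimension sets catches the common failure mode where the model invents a label outside the codebook; such entries are skipped rather than stored.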