Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Okay but like...what is intelligence? What is thinking? How do we know whether A…" (ytr_UgzcvYv9p…)
- "We have photorealistic graphics that can be dynamically generated in real time, …" (ytc_UgzrPpvtk…)
- "How do you know a robot is a plumber? It has a butt crack. 😂😂😂😂…" (ytc_UgzmlMn_d…)
- "GPT chatbots are unironically fantastic study buddies, as long as you feed it th…" (ytr_UgyL5_8c7…)
- "As a programmer I can ensure you that you don't need to have feelings or anythin…" (ytc_UgzKJIEuh…)
- "1. Rule Based AI 2. Context Based AI 3. Narrow Domain AI 4. Reasoning AI 5. …" (ytc_Ugxm6OuAW…)
- "Like I was saying to someone the other day, I'm honestly less worried about ai s…" (ytc_Ugzc63Y-p…)
- "Junior Devs have already started being replaced, similarly HR has also started t…" (ytr_UgxHxws5S…)
Comment
This is just my opinion but I do think this guy is just thinking about humans as mechanical and unchallenging cogs in a system. You can most definitely automate all jobs. But can you take those jobs from people who will decide not to embrace AI? There is a very large portion of the global population who are religious and have views on AI which include it being anti-theistic in many ways and will not embrace it, easy example are the Amish and lots of christians. So I personally feel the future is going to be one where you have a lot of liberals trying to force AI onto the population using government and private means and christians just not wanting it due to politics and religious views about AI being used as a tool of control. The future is a parallel economy in my opinion. And if you factor in child births, liberals are a not having kids and will depopulate faster due to the use of AI and all its distractions and conservatives will outlast because they still want kids and will probably dismantle AI systems even if its beneficial due to religious views.
Source: youtube · Topic: AI Governance · 2025-09-05T06:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzmtVDl8WQiXs7bP_R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzjKcHifIyj2GdDJ5t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwe1eyLBzadfrWWwrx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyp9E4VyfTCFv8Fg6d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyPQ0McwzaRDsczpGJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxI6wALTJlx5DnvNvZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugx7BRr3-IX7uPQNTCN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy74t9e5LCz-OqvOXh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwyN0Iiy2rMzeJB4ZR4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwNJAn95mEwxysUr014AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
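The raw response above is a JSON array with one record per coded comment, each carrying the four coding dimensions shown in the table. As a minimal sketch of how such a response might be parsed and validated before display, one could filter out malformed records like this (the allowed-value sets below are an assumption inferred from the values observed on this page, not a documented codebook):

```python
import json

# Assumed vocabulary per dimension, inferred from the records shown above.
ALLOWED = {
    "responsibility": {"company", "ai_itself", "user", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "unclear", "ban"},
    "emotion": {"outrage", "fear", "mixed", "approval", "resignation", "indifference"},
}

# One record taken verbatim from the raw LLM response above.
raw = '''[
  {"id": "ytc_UgwyN0Iiy2rMzeJB4ZR4AaABAg", "responsibility": "distributed",
   "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"}
]'''

def parse_codes(raw_json: str) -> dict:
    """Parse a raw coding response into {comment_id: codes}, dropping any
    record that lacks an id or uses an out-of-vocabulary value."""
    out = {}
    for rec in json.loads(raw_json):
        cid = rec.get("id")
        if not cid:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            out[cid] = {dim: rec[dim] for dim in ALLOWED}
    return out

codes = parse_codes(raw)
print(codes["ytc_UgwyN0Iiy2rMzeJB4ZR4AaABAg"]["reasoning"])  # deontological
```

Validating against a fixed vocabulary like this is one way to catch the occasional off-schema value an LLM coder can emit before it reaches a results table such as the one above.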