Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "It's not just good coming with AI, lots of problems will come with it. Great vide…" (ytc_UgwH5r7sn…)
- "And the worst part is that for every question that he asks ChatGPT, one child di…" (ytc_UgzFeyv9p…)
- "You can't let kids fly drones anymore but being killed by ai is totally fine. Wh…" (ytc_Ugzi0P24F…)
- "At 5:01 the man Robot 🤖 said what are you talking about I think gargle ? Or whoe…" (ytc_UgxArRWSM…)
- "They can automate all they want, but I'm yet to see a robot buying food to eat.…" (ytc_UgwDsD27Q…)
- "Mr. Lemoine is very bright and his assessments of the AI issues are right on, de…" (ytc_UgzD3LKo7…)
- "Next you know they'll blame AI for hacking the internet, when they themselves ca…" (ytc_Ugw_6ZM9-…)
- "The threat of AI to the human workforce is just the tip of the iceberg! It's bei…" (ytc_UgxQwWFNg…)
Comment
If history has taught us anything, it is that humanity seldom awakens to wisdom without first enduring catastrophe. Only after a global economic collapse or something far worse, do we finally grasp that we should have forged a unified framework for AI governance long before our own complacency sealed the outcome. As human beings, we cling to the absurd habit of learning only through our mistakes, but at what cost?
In the age of artificial intelligence, we may not be granted the luxury of rectification.
If you are under the age of thirty and reading this, you should be deeply concerned.
One thing for certain is that our current governments are ill equipped to comprehend what is needed.
Source: youtube · AI Jobs · 2025-10-08T07:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwjXFvVn_0OYr481yZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugys1TK_Ri_-SsgIh5V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx3_9eY1DbJbtlel-l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyJthb4kj2ztM1Eqy94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxnhFukRU2P3fiRVxh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxB_GFtG8E5S3eujiJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxl87w0UxG0VK4ygKN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz1uVW2Yuc7hYGDymx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyAWrvFC2f_8E2Zdtd4AaABAg","responsibility":"investor","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxjhDNQo4dXzkUB9lF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
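A raw batch response like the one above needs to be parsed and checked before its codes are stored. The sketch below is one minimal way to do that, assuming the codebook contains exactly the values visible on this page (the real codebook may be larger); `CODEBOOK` and `validate_batch` are hypothetical names, not part of the tool.

```python
import json

# Allowed codes per dimension, inferred from the samples shown on this
# page (assumption: the actual codebook may define additional values).
CODEBOOK = {
    "responsibility": {"none", "distributed", "company", "investor", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"approval", "fear", "outrage", "indifference"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM response and index well-formed records by comment ID."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        # Reject any record whose code falls outside the known codebook.
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: invalid {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in CODEBOOK}
    return coded

raw = ('[{"id":"ytc_example","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
coded = validate_batch(raw)
```

Indexing by comment ID is what makes the "look up by comment ID" view cheap: each lookup is a single dictionary access rather than a scan of the batch.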