Raw LLM Responses
Inspect the exact model output for any coded comment; entries can be looked up by comment ID.
Random samples
- "It might take 50 years or 1000 years for Ai to become self aware but it could ha…" (ytc_Ugxsb7sLH…)
- "This guys a fool compare to both robot. Don't he knows they have all the collec…" (ytc_Ugy3av42g…)
- "the moment the AI learns from itself and provides more accurate information and …" (ytc_Ugw__qgsH…)
- "If AI can write code, GPT 4 can, and it becomes aware by itself or worse, by som…" (ytc_UgwrEtG_D…)
- "Here's a reason not to be nicer to AI - it doesn't care, and it normalizes AI us…" (ytc_UgxeO_2Iq…)
- "If he supports driverless cars and uses the sevice then f him and his complaints…" (ytc_Ugw8G5CJV…)
- "Robot: You fake robot, intruder want my job / Human: Wait, wait, calm down / Robot:…" (ytc_UgxIBBNVQ…)
- "A materialist guest with a lot of blind spots warning us and bringing us towards…" (ytc_Ugy7ITh_V…)
Comment
There are so many things that are not automated and have many hurdles in life. Some AI doomers act like our only problem until now is productivity. Forigen cheaper labor could have always affected the economy just as much if not more than AI, but laws were implemented to avoid that. Until now, AI is still far from human capacity. Almost all major AI companies are not able to make profit of AI. Running massive LLMs on scale will need even more power and costs and it is still not at a human level even then.
I understand the if a true AGI that is smarter or as smart as humans comes, it is a different story, but until now, it is just a hypothetical.
Also, Robots and AI are not the same.
Source: youtube · Viral AI Reaction · 2025-11-25T12:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzgfdyAQrhf9XOW6Q14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwFYT5Y_39etcNo_yN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy4ElLakMf2FskNhmh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_Ugx9uR9jOpsCGOkzXC94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzeK4hThTbhBqIWYvJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyeqyylGQlkgT1NOZh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwCsDYVI1L9l6LScA94AaABAg","responsibility":"user","reasoning":"contractualist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxrRewfN1WhnUGche14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx-p3N9hlXfvCnqGqV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw5oRhdbV1cw1hHdv94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
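The raw response above is a JSON array, one object per coded comment, with four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of parsing it, indexing by comment ID, and validating each dimension against its label set — the allowed values here are an assumption inferred from the outputs shown above, not an authoritative codebook:

```python
import json

# Assumed label sets per dimension, inferred from the observed coder output;
# the real codebook may define more values.
ALLOWED = {
    "responsibility": {"none", "company", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"indifference", "mixed", "resignation", "outrage", "approval"},
}

# Two rows copied verbatim from the raw LLM response above.
raw = '''[
{"id":"ytc_Ugy4ElLakMf2FskNhmh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgwCsDYVI1L9l6LScA94AaABAg","responsibility":"user","reasoning":"contractualist","policy":"liability","emotion":"outrage"}
]'''

rows = json.loads(raw)
by_id = {row["id"]: row for row in rows}  # look up a coding by comment ID

# Reject any row whose value falls outside the assumed label set.
for row in rows:
    for dim, allowed in ALLOWED.items():
        assert row.get(dim) in allowed, (row["id"], dim, row.get(dim))

print(by_id["ytc_Ugy4ElLakMf2FskNhmh4AaABAg"]["policy"])  # regulate
```

Validating before storing catches the common failure mode where the model invents a label outside the schema, which would otherwise surface later as a silent category in the coded results.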