Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID

Random samples (click to inspect):

- "i accidently paid for it" / "Ok, lets assume this is true which it isn't. Why did …" (ytc_UgyTToW3Z…)
- "Humans killing thousands other humans and god knows how many animals everyday on…" (ytc_UgyvrKWcX…)
- "Fair warning for anyone who does deep work, long chats, big context, inputting/o…" (rdc_o81ng9j)
- "I can see how it might feel that way! The interaction between AI and humans can …" (ytr_UgyorGecX…)
- "This is, in my opinion, what AI art should be used for. To help compile your ide…" (ytr_UgwdDERlg…)
- "These endless videos about AI safety, while sometimes interesting, are nothing b…" (ytc_UgwnBsozd…)
- "So, how in the world are we going to survive if we can't work because of AI?…" (ytc_UgwjZ5MuT…)
- "No, ai have no concept of proper perspective or environmental functions, it only…" (ytr_UgwAk031q…)
Comment
No, you are wrong. AI will progress to agentic AI, then super intelligence. It will do so at a rapid pace that we will not be able to keep up and there is nothing we can do that can't by done by machines. You are operating from an assumption that humans will be able to keep up because we will still have things to do that machines can't. There is nothing that will be left for us to do. Best caae scenario: super intelligence allows us to continue to exist for non-essential reasons known only to them, but does not allow us to do anything because humans make mistakes; worst case scenario, they wipe us out because we don't serve a purpose in their new world order.
youtube · AI Jobs · 2025-06-25T08:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
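For working with these records programmatically, the four coding dimensions and the label values visible on this page can be sketched as a small typed schema. This is a minimal illustration only: the names (`CodingResult` and the `Literal` value sets) are hypothetical, and the coder's full label taxonomy may include values not shown here.

```python
from typing import Literal, TypedDict

# Label values observed on this page; the real taxonomy used by the coder
# may contain additional values (illustrative sketch, not the authoritative schema).
Responsibility = Literal["ai_itself", "company", "developer", "none"]
Reasoning = Literal["consequentialist", "deontological", "virtue"]
Policy = Literal["none", "unclear", "ban", "regulate", "liability"]
Emotion = Literal["resignation", "outrage", "fear", "indifference", "approval"]

class CodingResult(TypedDict):
    id: str              # comment ID, e.g. "ytc_…", "ytr_…", or "rdc_…"
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```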
Raw LLM Response
```json
[
  {"id":"ytc_UgyU3He_5H8nulIfWTt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz5ZGAULJJKAkidujp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxf8Ja2v1CJhGOMdnB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxNMQcZEDCFcOXA6Vd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx-sUVGTLBJygsI_al4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugy2ppsE2hKzj0AFSe14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzG4n5LNhA6o1BuWO14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_Ugx6AihcN9-UtjecLkx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxjGd1bNohyGbIIned4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyiyH2k7V7aI1S1_mB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
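As a rough sketch of how the lookup view above could be backed by this output, assuming the model always returns a JSON array like the one shown: parse it, index the rows by comment ID, and retrieve the coding for the comment in the detail view. The variable names below are illustrative, and the response string is truncated to three entries for brevity.

```python
import json

# Raw model output as shown above (truncated to three of the ten entries).
raw_response = """
[
 {"id":"ytc_UgyU3He_5H8nulIfWTt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugz5ZGAULJJKAkidujp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugxf8Ja2v1CJhGOMdnB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
"""

# Parse the batch and index each coding by its comment ID.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Looking up the comment from the detail view above reproduces the
# "Coding Result" table: ai_itself / consequentialist / none / resignation.
result = codings["ytc_Ugxf8Ja2v1CJhGOMdnB4AaABAg"]
print(result["responsibility"], result["reasoning"], result["policy"], result["emotion"])
```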