Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Fiction-wise, a great story that covers exactly what Musk is talking about, is the TV show for Terminator. It takes place after all but the latest movie, but the key here is that the super AI created sub-AI's so that all it's robots were autonomous. At that point, a small # of those actually felt bad for humans, and chose to support us - by building an equal AI to Skynet, but one that was focused 100% on human moral ideas rather than "win power".
youtube · AI Governance · 2023-04-22T13:0… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
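A coded record like the one above can be sanity-checked against the coding scheme before it is displayed. A minimal sketch follows; the allowed category sets here are assumptions inferred from the values visible on this page, not the full codebook:

```python
# Allowed values per dimension — inferred from values visible on this page,
# NOT the full codebook; extend as needed.
SCHEMA = {
    "responsibility": {"ai_itself", "company", "developer", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "unclear"},
    "emotion": {"mixed", "fear", "outrage", "indifference", "resignation", "approval"},
}

def validate(record: dict) -> list:
    """Return (dimension, value) pairs that fall outside the schema."""
    return [(dim, record.get(dim)) for dim in SCHEMA
            if record.get(dim) not in SCHEMA[dim]]

# The record shown in the Coding Result table above:
coded = {"responsibility": "ai_itself", "reasoning": "consequentialist",
         "policy": "unclear", "emotion": "mixed"}
print(validate(coded))  # [] — the record conforms
```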
Raw LLM Response
[
{"id":"ytc_UgzqXrro1BuHgZclS5V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugyvc9qyXvpo4uuQoSd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugxm90Cz9ic2BuQANbp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy1bTssf0RY0H1PJEd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxcsfOIvdygXEZXlxd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyJrlVBFipR9PQPldV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugx6J6rNXfpU8X0dJpB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwPAxazlT4uef736iF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwPhI6BLMLvjUyXZOF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyH_6XkcCtqaALxZgt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
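The batch response above is a plain JSON array, so looking up the coding for a single comment reduces to parsing it and indexing by `id`. A minimal sketch, assuming the response arrives as a string (only two records from the array above are reproduced here for brevity):

```python
import json

# Raw batch response: a JSON array of coded records, one per comment.
raw_response = '''
[
  {"id": "ytc_UgzqXrro1BuHgZclS5V4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgyH_6XkcCtqaALxZgt4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}
]
'''

def index_by_id(payload: str) -> dict:
    """Parse a batch coding response and index its records by comment ID."""
    records = json.loads(payload)
    return {rec["id"]: rec for rec in records}

codings = index_by_id(raw_response)
rec = codings["ytc_UgyH_6XkcCtqaALxZgt4AaABAg"]
print(rec["responsibility"], rec["emotion"])  # ai_itself mixed
```

A real pipeline would also want to catch `json.JSONDecodeError`, since LLM output is not guaranteed to be well-formed JSON.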