Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
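The storage backend is not shown on this page. As a minimal sketch, assuming the coded comments live in a hypothetical JSON-lines file `coded_comments.jsonl` with one record per comment, a lookup by ID could work like this:

```python
import json

def lookup_comment(coded_path: str, comment_id: str) -> dict | None:
    """Scan a JSON-lines file of coded comments for a single ID."""
    with open(coded_path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record["id"] == comment_id:
                return record
    return None

# Hypothetical usage, with an ID taken from the batch output below:
# lookup_comment("coded_comments.jsonl", "ytc_UgxM728SphNwsfrOr-d4AaABAg")
```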
Random samples — click to inspect
- "Is it worth learning to code in 2025? No. Is it worth it to create videos promot…" (ytc_Ugx0rNiw5…)
- "Not even the Adeptus Mechanicus would come to the conclusion that the A.I. was b…" (ytc_Ugy1rse_q…)
- "@OPP045-ty2zs In the early days of Photoshop and Illustrator, traditional artist…" (ytr_Ugw0xbrJb…)
- "Try understand virtual machine design where limited physical manufacturing can m…" (ytc_UgzKTxyDo…)
- "Basics. We keep feeding something we are not sure of, possibly afraid of, and the…" (ytc_UgzuXImXV…)
- "why are all these AI avatars, CIRCLES, Jen Lopez film? ai art programs, etc. …" (ytc_UgyeP17gu…)
- "@GlovemeisterYT didn’t say it’s a good thing, everyone collectively trying to pre…" (ytr_UgzQEaEJP…)
- "Do not use AI in any context and call out every single person you know who uses …" (ytc_UgzscHGwG…)
Comment
> The first thing is that they should not make a single robot in humanoid form, they should not look like us, we should never confuse them as human, they are not us. The biggest issue is job loss and what humans will do for money and how we will function having no purpose. The companies developing AI will become quadrillionaires, and many of those who are developing AGI have no interest in saving humanity. Why would they want to give us universal income so we can survive? They have exactly zero motivation to take care of us, it does not benefit them, in fact ultimately they may view us as the insects using resources they need for themselves. If they cared about the survival of the human race they would have chosen to do this only when they put every safeguard in place first. We can all thank Sam Altman for safety being completely abandoned. They are far more interested in their own salvation so they can witness what is to come, we are just something getting in their way!
Platform: youtube
Topic: AI Governance
Timestamp: 2025-12-08T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
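
The table above renders one record from the batch output below. A sketch of that record as a typed structure, with value sets inferred only from what appears on this page (the real codebook may define more labels):

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment, mirroring the four dimensions shown above.

    Example values are taken from this page and are not exhaustive;
    each dimension also allows "unclear" when the model cannot decide.
    """
    id: str              # e.g. "ytc_UgxM728SphNwsfrOr-d4AaABAg"
    responsibility: str  # e.g. "company", "user", "ai_itself", "unclear"
    reasoning: str       # e.g. "consequentialist", "virtue", "unclear"
    policy: str          # e.g. "regulate", "none", "unclear"
    emotion: str         # e.g. "fear", "outrage", "indifference", "unclear"
```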
Raw LLM Response
```json
[
  {"id":"ytc_UgxU_zhG_Jo59YxLJRJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzhyhMkmGf8kCJK1RB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxM728SphNwsfrOr-d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugyy0ZIV6sTro2cEUf54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxkrnOJh5y8fnIp-th4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugy8jUXR8BLjZlxC_a14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugw6mt-dDKEoqBj4pOJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugys_yRWukI_tTyMULB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwpEHZYFZqdlwhOxbx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugyq0OHXDF5CIU60I994AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
```
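
Raw responses like this one are only usable if they parse cleanly. A minimal validation sketch, assuming the five keys shown above are required on every record; the strict-rejection behavior is an assumption, not the tool's documented policy:

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch_response(raw: str) -> list[dict]:
    """Parse one raw LLM response into a list of coded records.

    Raises ValueError unless the output is a JSON array of objects
    carrying exactly the five expected keys.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    for rec in records:
        if not isinstance(rec, dict) or set(rec) != REQUIRED_KEYS:
            raise ValueError(f"malformed record: {rec!r}")
    return records
```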