Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytc_UgxKaAKdx…: Are we in the AI bubble? Yes. Is AI gonna die? No. Google might be of of the few…
- ytc_UgynSU4U6…: And then people say other people lie that ai is going to rule the world 💀…
- ytr_UgxI9sirk…: Smarter in terms of being better informed. That too will depend on us, as to wha…
- ytc_UgyXuXnSp…: why is everyone concerned about the "intelligence" of the current "A.I." race wh…
- ytc_UgwP22Md0…: In photography, telling a model how to pose and what to do is in my opinion part…
- ytc_UgzUc9K-J…: HR announced “AI integration” and everyone pretended it wasn't a death sentence.…
- ytc_UgwCtL3wv…: I’m 62 and have been listening in long enough about the coming AI wave. I’ve wri…
- ytc_UgzBnhimb…: Plumber and electrician jobs are not safe. If everyone all of a sudden went into…
Comment
AI has been a mixture of hobby and work for me since the mid-1980s. Personally, I am not concerned about the Alignment Problem, per se. It is dangerous when used maliciously but accidental maliciousness is very unlikely and only in edge cases. It's important to look at what drives the AI. Conventional AI is driven by strict goals and/or objectives. A goal is like a point system where you want to do better (like capturing more valuable pieces in Chess). An objective is a discrete state to achieve (like checkmate). Any agent driven strictly by logic/mechanics (and this includes unmoving goals and objectives) will inevitably usurp the purpose for which it was build. However, this isn't merely a danger of killing humans but of all kinds of malfunction, meaning they will no be competent. Further, when you know the goals and objectives, you can use this knowledge to manipulate them to your own advantage. A more potentially dangerous AI would be one with free will, where the AI is driven by value-judgments. In this case, the AI can explore to derive options and evaluate those options against each other in terms of where preference = likelihood multiplied by efficacy. There are both positive and negative efficacies, like fullfilling a need for energy or something else. However, if the values that drive it are "aligned", such as the Golden Rule, then it would be more ethical than us humans. They could be our salvation from ourselves.
youtube · AI Moral Status · 2025-06-10T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgwUFJHwYTpO--F0ngN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyfKj1CSrwldzUUy0R4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzD_5ALhuOCiyHV_fx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyTe1Gf_X5fEI_zuzV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy5byaDWsgfnpvHzZR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzcCbzE_HgoJs844qZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgxzTec53bKky-hoytd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxqn1EFyBQCK3vPH7x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugyf29f7Mg6OQ4nwg1F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugyrx_8CD3C4sT50eqd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}]
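The raw response above is a JSON array of per-comment codings, one object per comment with the four coded dimensions plus the comment ID. As a minimal sketch of how such a response could be parsed and tallied (function and variable names here are illustrative, not part of the dashboard; the two-row sample is abridged from the array above), including a guard for a stray closing parenthesis that can appear in place of the final `]` in raw model output:

```python
import json
from collections import Counter

# Abridged two-row sample of a raw coding response (taken from the array above).
raw = '''[{"id":"ytc_UgwUFJHwYTpO--F0ngN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyfKj1CSrwldzUUy0R4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"fear"}]'''

def parse_codings(text: str) -> list[dict]:
    """Parse a raw LLM coding response into a list of row dicts."""
    cleaned = text.strip()
    # Repair a common malformation: the array closed with ")" instead of "]".
    if cleaned.endswith(")"):
        cleaned = cleaned[:-1] + "]"
    rows = json.loads(cleaned)
    # Every row should carry the comment ID plus the four coded dimensions.
    required = {"id", "responsibility", "reasoning", "policy", "emotion"}
    for row in rows:
        missing = required - row.keys()
        if missing:
            raise ValueError(f"row {row.get('id')!r} is missing {missing}")
    return rows

rows = parse_codings(raw)
emotions = Counter(r["emotion"] for r in rows)
print(emotions["fear"])  # 1 for the two-row sample above
```

A validation pass like this catches both truncated arrays and rows with dropped keys before the codings are written back into the results table.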