Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgygVBSRU…: Maybe the AI bots will let us live out our lives in parks like squirrels...…
- ytc_Ugwvgviz1…: I think this is great! AI is an amazing starting starting point but shouldn’t be…
- ytr_Ugyqhl8kA…: I think a big issue is calling it AI in the first place. Even if it matches the …
- ytc_Ugzaz-2xB…: 0:04 I will say that I use AI references only if the pose I’m looking for or des…
- ytr_UgyZzP1Et…: >Be IamFries >Click on an anti AI video >The creator of the video does research …
- ytr_UgyLew8wo…: @the130metersguy3 no ai works like a brain simply speaking it adapts and learns …
- ytc_UgzN_dzhy…: The problem is capitalism... (or at least the bullshit mythologised unregulated …
- ytc_Ugy87WVJa…: Its so silly how humans complain 24/7 about their jobs, but are terrified that A…
Comment
I think it is very important to realize that if they are really worried about alignment, they are unfortunately doing exactly the wrong thing, because of a very understandable but catastrophic misunderstanding. The oversimplified logic seems to run: "we are trying to build intelligence; this is not intelligent yet. So what should we do? Humans are intelligent, so we should mimic humans. Humans have goals, a value system. So we should make these things have goals, values, etc." Talking about alignment is a symptom of us putting the cart before the horse. Instead of defining our goals and building tools for them, we are pouring an insane amount of energy into building the most complicated _stochastic_ machine. Goals, intentions, etc. are not part of intelligence. We should stop seeing this endeavor as an attempt to create another intelligent species and instead build tools intentionally and purposefully. Unfortunately, the big companies are locked in an arms race: the more they try to create agents, the more complex things will get. The problem is not that AI systems will develop an evil ego. The problem is that by posing the problem this way (building things that emulate humans) we end up putting them in positions of decision making. Putting a complex system with *unpredictable* behavior in a _decision making role_ is a recipe for disaster. Disaster doesn't require the thing to have bad intentions; it is the use case that is the problem. The chatbot was a bad idea, so was Sora, and it seems like we are just doubling down on the wrong path.
Source: youtube · AI Moral Status · 2025-10-31T13:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
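Each coding row is expected to draw its values from a fixed vocabulary per dimension. A minimal validation sketch; the allowed-value sets and the helper name below are assumptions inferred from observed codings in this tool, not a documented schema:

```python
# Assumed vocabulary per dimension -- inferred from observed codings, not a documented schema.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed"},
}

def invalid_dimensions(coding: dict) -> list:
    """Return the dimensions whose value falls outside the assumed vocabulary."""
    return [dim for dim, values in ALLOWED.items()
            if coding.get(dim) not in values]

# The coding shown in the table above.
row = {"responsibility": "developer", "reasoning": "consequentialist",
       "policy": "none", "emotion": "mixed"}
print(invalid_dimensions(row))  # []
```

A row missing a dimension, or carrying an unlisted value, is reported rather than silently stored, which makes malformed LLM output easy to flag before it reaches the results table.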
Raw LLM Response
[
{"id":"ytc_UgzP70ix2PKtiHVcbWN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzGAl1hr4cKdxQ5ez54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugye_52wf7-yvnbmb814AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw_HCArOhYX7qErAN54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxeVF3QOmvsKgDvEel4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzKrVVcaRxCW5jxgoB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw80i-COGpIL6xpnEd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxMbtsrZZJWmzZn7654AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugyf_JcKywvlI9mqp_h4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwrnJdWRTx_ANa3BnR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
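The response above is a JSON array of per-comment codings. A sketch of parsing such an array into a lookup keyed by comment ID; the field names are taken from the response above, but the skip-malformed-rows behavior is an assumption, not the pipeline's documented handling:

```python
import json

# Two rows copied from the raw response above.
RAW = '''[
  {"id": "ytc_UgzP70ix2PKtiHVcbWN4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwrnJdWRTx_ANa3BnR4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]'''

REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(raw: str) -> dict:
    """Map comment ID -> coding dict, silently dropping rows missing a field."""
    codings = {}
    for row in json.loads(raw):
        if REQUIRED <= row.keys():  # all required fields present
            codings[row["id"]] = {k: row[k] for k in REQUIRED - {"id"}}
    return codings

codings = parse_codings(RAW)
print(codings["ytc_UgwrnJdWRTx_ANa3BnR4AaABAg"]["responsibility"])  # developer
```

Keying by ID is what makes the "look up by comment ID" view above possible: each coded comment resolves to exactly one coding record.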