Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think it is very important to realize that if they are really worried about alignment, they are unfortunately doing exactly the wrong thing, because of a very understandable but catastrophic misunderstanding. The oversimplified logic seems to be along the lines of: "we are trying to build intelligence; this is not intelligent yet. So what should we do? Humans are intelligent, so we should mimic humans. Humans have goals, a value system. So we should make these things have goals, values, etc." Talking about alignment is a symptom of us putting the cart before the horse. Instead of defining our goals and building a tool for them, we are putting an insane amount of energy towards building the most complicated _stochastic_ machine. Goals, intentions, etc. are not part of intelligence. We should stop seeing this endeavor as if we are trying to create another intelligent species and instead build tools intentionally and purposefully. Unfortunately the big companies are in this arms race. The more they try to create agents, the more complex things are going to get. The problem is not that AI systems will develop an evil ego. The problem is that by posing the problem as such (building things that emulate humans) we are going to be putting them in positions of decision making. Putting a complex system that has *unpredictable* behavior in a _decision making role_ is a recipe for disaster. Disaster doesn't need the thing to have bad intentions. It is the use case that is the problem. ChatBot was a bad idea, so was Sora, and it seems like we are just doubling down on the wrong path.
Source: youtube · AI Moral Status · 2025-10-31T13:1…
Coding Result
Dimension      | Value
---------------+---------------------------
Responsibility | developer
Reasoning      | consequentialist
Policy         | none
Emotion        | mixed
Coded at       | 2026-04-26T23:09:12.988011
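
The dimensions in this table follow a fixed per-comment schema. Below is a minimal Python sketch of that schema; the allowed value sets are inferred only from the codes visible in the raw response further down, so the actual codebook may contain more labels, and the `Coding` class name is illustrative rather than the pipeline's real type.

    from dataclasses import dataclass

    # Label sets observed in this batch; the actual codebook may be larger.
    RESPONSIBILITY = {"company", "developer", "user", "ai_itself", "none"}
    REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
    POLICY = {"liability", "regulate", "none", "unclear"}
    EMOTION = {"outrage", "fear", "indifference", "mixed"}

    @dataclass
    class Coding:
        """One coded comment: four dimensions plus the pipeline's timestamp."""
        id: str              # comment id, e.g. "ytc_UgwrnJdWRTx_ANa3BnR4AaABAg"
        responsibility: str  # who is held responsible: one of RESPONSIBILITY
        reasoning: str       # moral-reasoning style: one of REASONING
        policy: str          # policy stance: one of POLICY
        emotion: str         # dominant emotion: one of EMOTION
        coded_at: str        # ISO-8601 timestamp added by the pipeline, not the model

        def is_valid(self) -> bool:
            """Check every dimension against the observed label sets."""
            return (self.responsibility in RESPONSIBILITY
                    and self.reasoning in REASONING
                    and self.policy in POLICY
                    and self.emotion in EMOTION)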
Raw LLM Response
[ {"id":"ytc_UgzP70ix2PKtiHVcbWN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgzGAl1hr4cKdxQ5ez54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugye_52wf7-yvnbmb814AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw_HCArOhYX7qErAN54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxeVF3QOmvsKgDvEel4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzKrVVcaRxCW5jxgoB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugw80i-COGpIL6xpnEd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxMbtsrZZJWmzZn7654AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}, {"id":"ytc_Ugyf_JcKywvlI9mqp_h4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwrnJdWRTx_ANa3BnR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"} ]