Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The people crying to pause are just assclowns just making noises like chickens clucking. The reason I say that is that there is roughly the same probability that people actually pause AI development as we suddenly decide to pause all violence, which is effectively 0%. Also I fail to see how govs of the world can regulate AI without crossing some serious lines into people's personal liberty. How would they even enforce regulations on research and development? Are they going to be doing random door to door, computer to computer, etc searches? This problem is similar in a way to the problem of encryption. Not using it screws us in many ways and any regulation is likely to be suspect from a personal liberty pov. I think the real risks from this tech isn't so much that a bad actor can, and almost for sure will, use it to hurt people(people have and will continue hitting people with hammers too) but rather that we, "we" being everyone including the top researchers in the field, have almost no clue what is going on inside these huge models with billions of parameters. This naturally increases the surface area for "blackswan" type events which are events, usually negative but not necessarily, that are hard to predict and have major long lasting impacts. I think we'll see positive and negative examples of these in the future
youtube 2023-05-08T17:4… ♥ 1
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugx8hkgo8y4kuhCHIZh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw6CWFgwEaTUq1YUzx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugz_EG9BZGNZUosFfO54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxAURW7T-9TfVAYUo54AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugznv6b7kwZ7tsFPsvx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzOFRYAdrjpLduFgMN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyliMwKaK5cN9QeBqt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzijW4tbNOrbwYbHtV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyDc0OwINZ_KB0_Rg54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxgzm8gb1wo5tHzBQV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]
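The raw response above is a batch of coded entries, one JSON object per comment, keyed by comment id. A minimal sketch of how such a response might be parsed into a lookup table (the function name, the "unclear" fallback, and the two-entry sample payload are assumptions for illustration, not the tool's actual code):

```python
import json

# The four coding dimensions this tool records per comment.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding_response(raw: str) -> dict:
    """Parse a batch-coding response into {comment_id: {dimension: value}}.

    Any dimension missing from an entry falls back to "unclear",
    mirroring the tool's default shown in the result table above.
    """
    entries = json.loads(raw)
    coded = {}
    for entry in entries:
        coded[entry["id"]] = {dim: entry.get(dim, "unclear") for dim in DIMENSIONS}
    return coded

# Sample payload mirroring the first two entries of the raw response.
raw = '''[
  {"id": "ytc_Ugx8hkgo8y4kuhCHIZh4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugw6CWFgwEaTUq1YUzx4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]'''
coded = parse_coding_response(raw)
print(coded["ytc_Ugx8hkgo8y4kuhCHIZh4AaABAg"]["emotion"])  # outrage
```

Keying by id lets the page above look up the coding for the displayed comment directly, rather than scanning the batch each time.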