Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The idea that alignment is the answer is ridiculous because it implies a prescriptive normative nature to society. Alignment says “I know what’s right for everyone so I’ll tell the AI to do that.” We are terrible at predicting the future and 80% of humans are so consumed by their emotion and status that they constantly make terrible choices. They virtue signal they are good people while being too lazy to do the real work to figure out what they should do about problems in the world. As long as they feel they are right they are and there is no winning with that.

There is 20% of the population (I feel) that are thinking critically and trying to make the transition here but it’s not apparent what the right answer is. The people who are lazy and want to feel good will always try to drag everyone in to hell with them because they know the answer. “Oh AIs bad so I won’t participate in training it.” Or “AI weapons shouldn’t be used because it’s scary.” Mean while our enemies a full throttle making these weapons and their civilians aren’t allowed to say no.

If our boot isn’t on the neck of the people who want to kill us we will die. That’s how it works and your white collar job that you over paid on a degree for is on its way out. Some white collar jobs will be safe because the employees will figure out how to continually adapt with new AI models and tools making themselves the asset. But most people will just go out kicking and screaming before they have to become a plumber.
Source: YouTube · Video: AI Jobs · 2026-02-26T19:5… · ♥ 1
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        virtue
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytr_UgzgwFa4NqrUqu3u5OR4AaABAg.AThNDpID2tOATiyZY1Sx_J","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytr_Ugw4_wBUJRMv6Bt8aI54AaABAg.AThNAGxE45JATksy4-_UgX","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytr_UgwP7Bh9IMgB6iON91R4AaABAg.AThMBdtiT19AThOYq4DY3l","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_UgzvjkfqXE7gAIsVSfJ4AaABAg.AThI_4vZpDpAThNSlUKLyC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_UgzvjkfqXE7gAIsVSfJ4AaABAg.AThI_4vZpDpAThSkyJmOwz","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytr_Ugy05z1jgVJWuN5OED94AaABAg.AThHGVQupASAThT4Nl6AH8","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytr_UgzfmVU35CL6jXKcbVV4AaABAg.AThH9K-65dbATj0kqmpCLJ","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_UgzfmVU35CL6jXKcbVV4AaABAg.AThH9K-65dbATjRtQBBrPB","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytr_UgxDsRCHul4tIiyJBy94AaABAg.AThH1oiQ3FtAUybIsBWVgJ","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytr_UgxEPRqLZhuR4MdwBY94AaABAg.AThGu8H-FVNAThO_CAgr_M","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"approval"} ]