Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
About 1:24:00 Melanie Mitchell says something like "The lawyer with AI couldn't outperform the other lawyer. Maybe AI will get better, but these assumptions are not obvious." The assumption that AI will get better isn't obvious? I don't think it's a huge stretch to think AI will probably get better. That's hardly wild speculation. I'm fairly optimistic, but this type of dismissal that AI could ever be a problem just seems naive. Of course there is hype and nonsense in the media, but there is also a lot of interesting work being done that shows incredible advancements in AI capability, and serious potential for harm because we don't entirely understand what's happening under the hood. The deception point was not just one person being deceived at one point; there have been multiple studies that show powerful LLMs outputting stuff contrary to their own internal reasoning because they predict it will be received better. There is a pattern of calculating one thing but saying another, especially when they have already committed to an answer. Maybe they are simply reflecting our own bias that is in the training data, our own propensity to lie when standing up for our beliefs. I don't know, but we can't just ignore it.
youtube AI Governance 2023-06-30T12:2… ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwB3du30RGqEcCfiqR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxGaW9p18AEp5IotE94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyTGAyJNDRV4NCT8_l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgymYtKkvEojeCBNPM14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz-I_5z2MH1F-xN_bt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwldkp-xVfE4OgJvBt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxVWR6IKqbF38JBXhF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxkUDV7V45fai2Dgtx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwCC1bMcxEIEk2suRF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzUXE2d9iCAiRPKfyN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
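A raw response like the one above can be matched back to a specific comment by its id. A minimal sketch of that lookup, assuming the response parses as a JSON array of per-comment objects (the `lookup` helper name is our own, and the array here is truncated to two of the entries shown above):

```python
import json

# Two entries copied from the raw LLM response above (truncated for brevity)
raw = '''[
  {"id":"ytc_UgwB3du30RGqEcCfiqR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxGaW9p18AEp5IotE94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]'''

def lookup(raw_response: str, comment_id: str):
    """Return the coded dimensions for one comment id, or None if absent."""
    for entry in json.loads(raw_response):
        if entry["id"] == comment_id:
            # Drop the id itself; keep only the coded dimensions
            return {k: v for k, v in entry.items() if k != "id"}
    return None

print(lookup(raw, "ytc_UgxGaW9p18AEp5IotE94AaABAg"))
# → {'responsibility': 'none', 'reasoning': 'consequentialist', 'policy': 'none', 'emotion': 'approval'}
```

A lookup that returns `None` for an id signals that the model skipped a comment in its batch, which is worth checking when a coding result looks missing.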