Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm not sure about motives, other than the obvious (technically sweet, lucrative, exciting, empowering), but the AI accelerationists are a consistently dishonest group. Pro-regulation is not a self-interested position as far as I can tell, giving that position a higher a priori presumption of honesty. At the more granular level, argument by argument, position by position, the regulators have a much better case. There are an unlimited number of ways that introducing powerful alien intelligences into our civilization will be dangerous, nor can we confidently ascribe some limit to just how dangerous they may be, nor do we know the threshold at which various levels of threat will arise. The philosophers invented a term to describe in part our epistemological position with respect to emergent AI: anosognosia, the inability to know what we do not know, or, in common parlance, the unknown unknowns.
youtube AI Governance 2023-06-27T13:3… ♥ 1
Coding Result
Dimension        Value
Responsibility   company
Reasoning        deontological
Policy           regulate
Emotion          outrage
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_UgwUti3nKWArqPeZ-Ut4AaABAg.9rSWSm0Wp_o9rU7yy3LPgi", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytr_Ugx-fWVIjvGigcWWvcx4AaABAg.9rSQLjpvTdp9rTMs4P91UI", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugx-fWVIjvGigcWWvcx4AaABAg.9rSQLjpvTdp9rVp-Q853sI", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_Ugwzbk-4P9eZqRv4nad4AaABAg.9rRUHxiVrrD9rUHoe6rc-j", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_Ugz-xaGPm3D8c0ixwBJ4AaABAg.9rRKEDkOEV39rTEdR_qqHb", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytr_Ugz-xaGPm3D8c0ixwBJ4AaABAg.9rRKEDkOEV39rTmYJdHCFt", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgxIXzNQGwU6g--gsSB4AaABAg.9rRAa9OypQh9rVO2L4QnOz", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_Ugz-SmocC08gAzk5kgp4AaABAg.9rR0f6HCIII9rTuvcLaWpp", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgzA8QT364rRklCbe8h4AaABAg.9rQzm8IReHZ9rSkNCQncq3", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytr_UgzA8QT364rRklCbe8h4AaABAg.9rQzm8IReHZ9rTCTQ3Th9H", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
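A raw batch response like the one above can be turned into per-comment coding results by parsing the JSON array and indexing it by comment id. The sketch below assumes only what the payload shows: a JSON array of objects, each with an `id` and the four coded dimensions (responsibility, reasoning, policy, emotion). The ids used here are hypothetical stand-ins, not real comment ids from the response.

```python
import json

# Hypothetical raw LLM response in the same shape as the payload above:
# a JSON array of coding objects, one per comment.
raw_response = """
[
  {"id": "ytr_example_1", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytr_example_2", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and index the codings by comment id."""
    records = json.loads(raw)
    return {rec["id"]: {d: rec.get(d) for d in DIMENSIONS} for rec in records}

codings = index_codings(raw_response)
print(codings["ytr_example_1"]["policy"])  # → regulate
```

Indexing by id makes it straightforward to join a coding back to the original comment record when inspecting the exact model output for any one comment.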