Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
That's kinda the issue. The assumption is that consciousness or being able to display high precision in its given tasks are necessary before Terminator scenario, but what AI safety research is actually concerned about are hallucinations and manipulation - things that AI are already showing signs of, and the scenario isn't "angry AI" or "robot war" either. Check out Robert Miles AI safety or Rational Animations for why this is a thing some pretty smart people are saying, and why it's a field of research at all, and don't just focus on all the stupid CEO money people who're saying this because they want regulations as anti-competitive market strategies. If it's basically really, really good at carrying out a task and setting its own sub-goals without understanding anything of what it's doing or if it's even conscious of itself, that's very bad.
youtube · AI Moral Status · 2025-10-30T19:1… · ♥ 3
Coding Result
Dimension      | Value
Responsibility | ai_itself
Reasoning      | consequentialist
Policy         | regulate
Emotion        | fear
Coded at       | 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytr_UgwNnocC73XTt-CBXqZ4AaABAg.AOuvt8qpcsMAOwc5FiqaaY", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytr_UgyVGfRkyqRF4Y_sOqV4AaABAg.AOuvmS1kShPAOuwS0P3mos", "responsibility": "company", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytr_UgyjfAyNFS3U3EpfanF4AaABAg.AOuvhcN_aL5AOuyddQugjE", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgxZfrPepDhI-nffwUh4AaABAg.AOuvYTd4fpOAOuxaKjrLFS", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytr_UgycVBgeP0Y5ki-MYDp4AaABAg.AOuvI0SXrKmAOuzhr857ei", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytr_UgygTTORqN_u9t-6ASd4AaABAg.AOuv1gJ1pd7AOuxHCRQ-eS", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgyXde3EvAxPhtQc8gh4AaABAg.AUnASDG29jcAVDwgzbppJH", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_Ugx6RA1SKARVqpeZAaN4AaABAg.AQum78joCfgAR9W1akHNrR", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytr_Ugy89sKCfpInLZuDaNx4AaABAg.APU2F7BtBJxAR9_eCdWg41", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgwD_8ldPliLhbZWB5h4AaABAg.AL4da-n0XM_ALYhALeFDg2", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
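The raw response is a JSON array of per-comment coding records, so the coding for any single comment can be recovered by matching on `id`. A minimal Python sketch, assuming the record schema shown in the response above (the function name and the one-record sample string are illustrative, not part of the pipeline):

```python
import json

# Illustrative one-record excerpt of the raw LLM response shown above.
RAW_RESPONSE = (
    '[{"id":"ytr_Ugy89sKCfpInLZuDaNx4AaABAg.APU2F7BtBJxAR9_eCdWg41",'
    '"responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"fear"}]'
)

def lookup_coding(raw_response, comment_id):
    """Parse the raw JSON array and return the coding record
    for one comment id, or None if the id is absent."""
    for record in json.loads(raw_response):
        if record.get("id") == comment_id:
            return record
    return None
```

With the excerpt above, looking up that id returns the same four dimensions displayed in the Coding Result table (responsibility ai_itself, reasoning consequentialist, policy regulate, emotion fear).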