Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
That's not the kind of alignment he's talking about. The "corporate-slave-AGI" you're thinking of is a benign scenario compared to the default one we're currently heading towards, which is an agentic AI that poses an existential threat because it doesn't understand the intent behind the goals it's given. [Intro to AI Safety, Remastered](https://www.youtube.com/watch?v=pYXy-A4siMw&t=2s)
Source: reddit · AI Moral Status · timestamp 1738009315.0 · ♥ 37
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_m9j33ec", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_m9i4odk", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_m9im9g4", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_m9jphet", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_m9ihrce", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
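The coding result shown above is recovered from this raw response by parsing the JSON array and selecting the record whose `id` matches the comment; here the displayed dimensions match the entry `rdc_m9ihrce`. A minimal sketch of that lookup, assuming the raw string is available in a variable (the truncated single-record payload below is illustrative, copied from the record above):

```python
import json

# Raw LLM response: a JSON array of per-comment codes, one object per comment.
# Only the matching record is reproduced here for brevity.
raw = ('[{"id":"rdc_m9ihrce","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')

codes = json.loads(raw)
by_id = {c["id"]: c for c in codes}  # index the batch by comment id

# Pull out the coding for the comment displayed above.
record = by_id["rdc_m9ihrce"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {record[dim]}")
```

Note that the model codes comments in batches, so a single raw response can contain codes for several comments; only the record whose `id` matches is rendered into the table.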