Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
On one side, a lot of work is very mechanical, and as a developer I see how mechanical work is removed with AI. On the other hand, it makes horrible mistakes because it doesn't care about outputting a large amount of stuff (in this case code) that usually causes issues later; for example, behavior that should be the same in two places it will write twice, and then if the behavior changes it will "remember" to change it in only one place. Meanwhile, "passing exams" is really easy for it, because there are a lot of examples of tests with the correct answers, so "rewrites" are a trivial task even for a basic AI mechanism. I don't think this changes much about consciousness or intelligence: doing a job better doesn't make someone or something more intelligent, it makes it better at that task, and AI is very good at performing extremely complex tasks without understanding why they're done or what they're good for.
reddit AI Moral Status 1751016621.0 ♥ 2
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_n01b3hf", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n01f0l6", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_n063hmh", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n0nl4k4", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mzxxz8b", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
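The raw response is a JSON array of codings for a whole batch, so inspecting one comment means filtering by its id. A minimal sketch of that lookup, assuming only the field names visible in the response above (the `coding_for` helper and the truncated sample data are illustrative, not part of the actual pipeline):

```python
import json

# Two entries copied from the raw response above, as sample input.
raw = '''[
  {"id": "rdc_n01b3hf", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none",
   "emotion": "indifference"},
  {"id": "rdc_mzxxz8b", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate",
   "emotion": "outrage"}
]'''

def coding_for(raw_response, comment_id):
    """Return the coded dimensions for one comment id, or None if absent."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return entry
    return None

print(coding_for(raw, "rdc_n01b3hf")["emotion"])  # indifference
```

Returning `None` for a missing id (rather than raising) makes it easy to spot comments the model silently dropped from a batch.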