Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I have been using the API to build products for a couple of months now. My latest project involves generating thousands of synthetic user profiles; that is, I'm trying to generate a collection of synthetic user profiles I could use to test idea acceptance across a wide range of customer types. It produces middle-of-the-road characters that have a default positive, optimistic bias and disposition unless I explicitly prompt against those biases to counteract the political-correctness training. It does not want to produce a typical Republican, right-leaning user profile without special prompting. Now, I'm not claiming that OpenAI did this on purpose, but I'm also not ruling it out. The point is, we've been seeing these biases emerge in the dataset the deeper we work on things that touch the lives of a wide range of people; the world is full of all types of people, after all, and using AI to model them and their reactions to specific circumstances is very important to me. It's tough to have to build counter-alignment into my prompting to compensate for it.
Source: reddit · AI Responsibility · 1682258438.0 · ♥ 6
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_jhe4t0l", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jhe6afc", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jhfdwh4", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_jhfl0jp", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "rdc_jhh110f", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"}
]
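The raw LLM response is a JSON array with one record per comment, each carrying a value for every coding dimension. A minimal sketch of how such a response could be parsed and looked up by comment id (the two-record string below is a trimmed, hypothetical example in the same format, not output from the pipeline itself):

```python
import json

# Hypothetical raw response string; field names match the array shown above.
raw = '''[
  {"id": "rdc_jhe4t0l", "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jhfdwh4", "responsibility": "government", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]'''

# Index the coded records by comment id for direct lookup.
codes = {record["id"]: record for record in json.loads(raw)}

# Retrieve the coded dimensions for one comment.
print(codes["rdc_jhfdwh4"]["policy"])   # regulate
```

Indexing by id is what lets a dashboard like this one join each coded record back to the original comment text.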