Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Depending on how OpenAI have implemented it, a standard transformer model will take the same time to compute any prompt, since invisible padding is added to make every prompt the same size as the model's input. (Again, this depends on their implementation.)
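The padding the comment describes can be sketched minimally. This is an illustrative mock only, not OpenAI's actual implementation; `PAD_ID` and `MAX_LEN` are made-up values standing in for a real pad-token id and context length.

```python
# Hypothetical sketch of the padding described in the comment:
# right-pad every tokenized prompt to a fixed length so each one
# occupies the same input size. PAD_ID and MAX_LEN are illustrative.
PAD_ID = 0
MAX_LEN = 8

def pad_prompt(token_ids, max_len=MAX_LEN, pad_id=PAD_ID):
    """Right-pad a token-id sequence to a constant length."""
    if len(token_ids) > max_len:
        raise ValueError("prompt longer than context window")
    return token_ids + [pad_id] * (max_len - len(token_ids))

print(pad_prompt([101, 2023, 2003]))  # → [101, 2023, 2003, 0, 0, 0, 0, 0]
```

Because every padded prompt has identical length, a naive dense forward pass over it would do the same amount of work regardless of the original prompt size, which is the behavior the commenter speculates about.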
reddit · AI Responsibility · 1706989169.0 · ♥ 1
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_kos6v1a", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_kop1esp", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_korzrb5", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_koonyhd", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_kosz9f4", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
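A raw response like the one above can be parsed and sanity-checked before its labels are written into the coding table. The sketch below is a hypothetical validator, not the pipeline's actual code; the allowed label sets are assumptions inferred only from the values visible in this dump, and the truncated `raw` string is a one-item stand-in for the full response.

```python
import json

# Hypothetical label sets, inferred from the values seen in this dump;
# the real coding scheme may allow more labels per dimension.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "unclear"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"none", "liability", "unclear"},
    "emotion": {"approval", "indifference", "outrage", "unclear"},
}

# Stand-in for a full raw LLM response (one coded comment shown).
raw = '[{"id": "rdc_kos6v1a", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}]'

def validate(raw_json):
    """Parse the raw response and reject any out-of-scheme label."""
    rows = json.loads(raw_json)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim!r}: {row.get(dim)!r}")
    return rows

print(len(validate(raw)))  # → 1
```

A check like this would also catch the kind of malformed output shown above (a response ending in `)` instead of `]` fails at `json.loads` rather than silently corrupting the coded results).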