Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Some prompts simply don't work on image generators, because some data it's never been fed, for example "A wine glass filled to the brim" was impossible because it litterally had no data for a full wine glass. That might have been manually patched out when it was pounted out, like for LLM's being asked to count the R's in Strawberry, openAI and others manually add a rule to adjust the answer for that question. They also do the same for AI benchmarks, manually tuning it to fake higher scores
YouTube · Viral AI Reaction · 2025-08-19T07:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugw0GDs4mhm9ibUgsNp4AaABAg", "responsibility": "none",    "reasoning": "mixed",            "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgyCFdp_O5kqg9wHX8t4AaABAg", "responsibility": "none",    "reasoning": "mixed",            "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgzSYCAMYHbWuHNFLm14AaABAg", "responsibility": "company", "reasoning": "deontological",    "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgxksRzgTzj4KA98I3R4AaABAg", "responsibility": "user",    "reasoning": "consequentialist", "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_Ugx6xUqg_41OPoZRb_t4AaABAg", "responsibility": "user",    "reasoning": "mixed",            "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgwBRBljToZttKKdc954AaABAg", "responsibility": "user",    "reasoning": "deontological",    "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgxD-KKDKp6gqb67NG14AaABAg", "responsibility": "user",    "reasoning": "virtue",           "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgxUY3Jppszh6tdfFpV4AaABAg", "responsibility": "none",    "reasoning": "mixed",            "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugyrmtsvww8keyz1iBp4AaABAg", "responsibility": "user",    "reasoning": "consequentialist", "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgxYQVjZGU42b-yNX_l4AaABAg", "responsibility": "user",    "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"}
]
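A response like the one above can be turned into per-comment records with a small parser. The sketch below is a minimal, hypothetical example: the allowed values per dimension are inferred only from the responses shown on this page, so the `CODEBOOK` sets are an assumption and the real codebook may define more categories.

```python
import json

# Allowed values per coding dimension. NOTE: inferred from the responses
# visible here, not from the actual codebook (assumption).
CODEBOOK = {
    "responsibility": {"none", "user", "company"},
    "reasoning": {"mixed", "deontological", "consequentialist", "virtue"},
    "policy": {"none", "liability"},
    "emotion": {"approval", "indifference", "mixed", "outrage"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes},
    dropping any record with a missing or out-of-codebook value."""
    coded = {}
    for rec in json.loads(raw):
        codes = {dim: rec.get(dim) for dim in CODEBOOK}
        # Keep the record only if every dimension holds an allowed value.
        if all(codes[dim] in allowed for dim, allowed in CODEBOOK.items()):
            coded[rec["id"]] = codes
    return coded

raw = ('[{"id": "ytc_example", "responsibility": "none", "reasoning": "mixed",'
       ' "policy": "none", "emotion": "indifference"}]')
print(parse_llm_response(raw)["ytc_example"]["emotion"])  # indifference
```

Validating against a fixed set of labels catches the common failure mode where the model invents a category outside the coding scheme; such records are silently dropped here, though a production pipeline might log them for re-coding instead.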