Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
After arguing with Gemini for over an hour when it misidentified a celebrity photo during a test session of mine and Gemini stubbornly insisting this celebrity was someone else no matter what evidence I presented it with, even conversations with the same LLM in a new window that correctly identified the celebrity, I can confidently say I’m not worried about AI replacing me anytime in the near future. Someday, definitely. Right now? It’s just too flawed. I’ve done many, many experiments of various kinds with AI in the past several months and its results are all over the place. It’s just not reliable right now. Even companies are seeing this and rolling back their AI-first policies because they see how poorly it can perform at times.
youtube 2025-05-23T12:1… ♥ 4
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T18:03:32.401335
Raw LLM Response
[
  {"id": "ytc_Ugx5cw_KbyX6UkSfdkZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzACYCZdGrhgu7oRyB4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxpuOEYObGOkwcQCRZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugwl-tSVPE0R0ssjaNV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgyDJWfbJGGr73UU8bV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
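The raw response is a JSON array with one coding object per comment id, covering the four dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal Python sketch of parsing such a response and looking up the coding for the comment above; the parsing code itself is illustrative and not part of the coding tool:

```python
import json

# Raw LLM response: a JSON array with one coding object per comment.
# The id and dimension values are taken from the response above;
# the array is shortened to one entry for brevity.
raw = """[
  {"id": "ytc_Ugx5cw_KbyX6UkSfdkZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]"""

# Index the codings by comment id for quick lookup.
codings = {row["id"]: row for row in json.loads(raw)}

row = codings["ytc_Ugx5cw_KbyX6UkSfdkZ4AaABAg"]
print(row["responsibility"], row["emotion"])  # → ai_itself indifference
```

Indexing by `id` mirrors how each "Coding Result" panel is matched back to its comment: the batch response codes several comments at once, and the panel shows only the object whose id matches the displayed comment.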