Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As smart as a college student? A very lazy and intellectually challenged US college student. Ask a question of an LLM and, if you know anything about the subject matter, you'll be shocked at the amount of falsehoods and utter lies it spouts. Even on the most basic level it can and will provide faulty data, and the scary thing is that it will do so while sounding 100% sure. I myself have seen LLMs say that something is A (and sometimes, in special conditions, B), while if your wording is just a little different, but still referring to the same standards and norms, it will say that the same thing is B and in some rare cases A. And in both cases, when you put follow-up questions to it, it will stick to its guns.
Source: youtube · AI Jobs · 2025-09-02T07:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgxmV3lW_3gk1LlfaaF4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwxqXztbll578b28I54AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwKMA-aXZhyLuSI1fR4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwfdYKr4ZMwBjbCE7d4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwWKFGey2e-N8B-Igp4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
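The raw response is a JSON array with one object per coded comment, keyed by comment id. As a minimal sketch (assuming the response parses as valid JSON and ids are unique), the per-comment coding result shown above can be recovered from the raw array like this; the helper name `code_for` is illustrative, not part of any tool shown here:

```python
import json

# Excerpt of the raw LLM response above: the entry for the displayed comment.
raw = '''[
  {"id": "ytc_UgwfdYKr4ZMwBjbCE7d4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''

def code_for(raw_response: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id, or {} if absent."""
    codes = json.loads(raw_response)
    return next((c for c in codes if c["id"] == comment_id), {})

result = code_for(raw, "ytc_UgwfdYKr4ZMwBjbCE7d4AaABAg")
print(result["responsibility"], result["emotion"])  # ai_itself fear
```

A real pipeline would also need to handle malformed model output (e.g. a `json.JSONDecodeError` when the LLM returns non-JSON text), which is why inspecting the raw response matters.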