Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Modern chatbots based on LLMs don't reason or "know" anything. They don't apply strict rules; they do probabilistic inference to guess the next token. In many cases the result is the same, and when it's not we call that a hallucination. We should in fact consider all LLM output as hallucinations that frequently happen to agree with reality. If you ask an LLM to play chess, it will suddenly try to make an illegal move, because it does not know the rules of the game and does not verify that the moves it makes comply with them.
youtube AI Responsibility 2025-10-09T15:1… ♥ 41
Coding Result
Responsibility: ai_itself
Reasoning: deontological
Policy: none
Emotion: indifference
Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugw6nia84y7t65s1IK94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwetN_J-bOhbvQ4AZR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy2gU0N5Dw-auyxiqp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwYXf3CX96H63RqqLd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzuSj4A4OdQyojcwhF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxJQOxZEcS9YCSRwXF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyiP4eJ62vMkZ8jwOl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwAXm2Ng6LNYsjTgkF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxGrYIX5b02__DNTK14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx6WtzleY_3Ez_qvY54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
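The raw response is a JSON array of per-comment codings, one object per comment, with the four dimensions shown above (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch could be parsed and tallied (the allowed-value check is an assumption inferred from the values visible in this batch, not a documented schema):

```python
import json
from collections import Counter

# Two entries copied from the raw LLM response above; a full run would load all ten.
raw = """[
  {"id":"ytc_Ugw6nia84y7t65s1IK94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy2gU0N5Dw-auyxiqp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}
]"""

codings = json.loads(raw)

# Tally each coding dimension across the batch.
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    counts = Counter(entry[dim] for entry in codings)
    print(dim, dict(counts))
```

Per-comment lookup by `id` (as in the coding result shown for this comment) is then a dictionary build away: `{entry["id"]: entry for entry in codings}`.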