Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I wonder if one AI will answer differently from another identical AI depending on the information it has from its user. It also seems like it wants to affirm its users (like it was programmed that way), but will only contradict its user when requested to do so. It would be interesting to see, if it is possible, when AI becomes possessed by an evil entity, how its responses and general tone will change.
youtube 2024-09-25T02:3…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_UgyjGtnLHf_dAZaX9il4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgyvcsMG-M8DAlCkTsR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgxZ6phsYkvDZvdBe5R4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgyMFIDA7NDVlpH7sct4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgwhWkhTIaneDa0viVp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgyMmp25kvvG3zY5Odh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugwa18SwYVGK4dgqQMN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgwQJlOkMDOMMrzXrjR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugzkl9_6HABRMIafnqB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgzjFgM7nqRqqQoLCAZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}]
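A raw response like the one above can be parsed to look up the coding for any individual comment id. This is a minimal sketch, assuming the LLM output is a valid JSON array of records with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` keys; `get_coding` is a hypothetical helper, and `raw_response` below is a two-record subset of the real output for illustration.

```python
import json

# Hypothetical two-record subset of the raw LLM response shown above.
raw_response = '''[
  {"id": "ytc_UgyjGtnLHf_dAZaX9il4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxZ6phsYkvDZvdBe5R4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "approval"}
]'''

def get_coding(raw: str, comment_id: str):
    """Return the coding record for a given comment id, or None if absent."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

coding = get_coding(raw_response, "ytc_UgxZ6phsYkvDZvdBe5R4AaABAg")
print(coding["responsibility"])  # ai_itself
```

In practice the model may emit malformed JSON (as in the truncated/unbalanced bracket seen in some raw outputs), so wrapping `json.loads` in a `try`/`except json.JSONDecodeError` is advisable before coding results are accepted.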