Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Before this gets implemented the AI needs to be made to be 100% truthful. How would you prevent wrong information from being given to the teacher as fact and incorporated into the lesson? There is also the issue of opinions not being formed by humans but whatever the AI says gets accepted as true.
youtube 2023-05-05T13:3…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgwrgRiz4kZWniyrl2V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzoXdfU1Y9vXAx39qN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxsyFzWN1uIRmjwWg14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwjR6L4jHpRAnn1JYB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzS3i6CdjQMl_2718J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwPdxU2hMnPuqnsi6p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgybOdsb4zNGqpa2Zmd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzcaUlYFDw0v70HqJh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw4wBjijF6bqHOplfh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw4GP_1cKWSA5vOXBF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
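When inspecting raw responses like the one above, it can help to parse the JSON and check every record against the coding schema before accepting it. The sketch below is a hypothetical validator, not part of this tool: the `ALLOWED` sets are inferred only from the values visible in this batch, and the real codebook may define more categories.

```python
import json

# Allowed values per coding dimension — assumed from the examples
# in the raw response above; the actual codebook may include more.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference"},
}

def validate_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject out-of-schema values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={value!r}")
    return records

# Example with a shortened, made-up comment id:
raw = ('[{"id":"ytc_x","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"fear"}]')
print(len(validate_raw_response(raw)))  # → 1
```

A check like this catches the common failure mode where the model invents a label outside the codebook, which would otherwise flow silently into the coded results table.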