Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Honestly it just is obvious that when you program AI to be dishonest (for example censoring info govt doesn't want public) that it will be evil.. However, i have a solution. Well two solutions .. let me post the link in a separate comment
YouTube · AI Harm Incident · 2025-07-26T17:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgxMwMjiex4IKPf4W-54AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugw5TkRYj-X5uPTKxFB4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyrlP7ZUMALh08-cmt4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "sorrow"},
  {"id": "ytc_UgxtDbdG9q1CbrXO9PR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzCS170qPRQ2h2OSfR4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwTHnU7GSwAmgKX75Z4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwfdFCYTOfk9ebEJsl4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgwxcFvdZfj6AbdInkB4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzmYg3g51LHcxlfZSh4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgxW3DN7PQjc4gtaZph4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "approval"}
]
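The per-comment coding shown in the table above can be recovered from the raw LLM response by matching on the comment id. A minimal sketch of that lookup (the `coding_for` helper is illustrative, not part of the pipeline, and assumes the raw response parses as a JSON array of per-comment records; the inlined `raw` string is an excerpt of the response above):

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment codings.
raw = """[
  {"id": "ytc_UgzCS170qPRQ2h2OSfR4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxW3DN7PQjc4gtaZph4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "regulate", "emotion": "approval"}
]"""

def coding_for(raw_json: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id, or an empty dict."""
    records = json.loads(raw_json)
    return next((r for r in records if r["id"] == comment_id), {})

coding = coding_for(raw, "ytc_UgzCS170qPRQ2h2OSfR4AaABAg")
print(coding["responsibility"], coding["policy"], coding["emotion"])
# developer regulate outrage
```

Note that the record found this way matches the coding-result table for this comment (responsibility: developer, reasoning: deontological, policy: regulate, emotion: outrage); an id with no record yields an empty dict rather than raising.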