Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Why do these people just assume AI would want to just kill us all? If we can make them sentient enough to decide why would they just kill then? I also find people ignore the bigger outcome that such robotics would help us overall much like how we help machines by fixing and upgrading them as they could do for us.
youtube 2015-07-30T05:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgiDWlIwGGvZhngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UghKCqoGqX72UngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgiOnaN_XOzqf3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UghZX0I6wYeM7HgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UggvPjoPgEehangCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UggYWv6ERmMUeXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugj3tGKiioJxx3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgjF-HOxz99HwXgCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UghMz-C9oM6ooXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugh5t2NeiPdAQXgCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}
]
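Because the model codes comments in batches, the record for a single comment has to be looked up by its id inside the returned JSON array. A minimal sketch of that lookup, assuming the array format shown above (the helper name and the choice of id are illustrative, not the tool's actual code):

```python
import json

# One batch response: a JSON array with one object per coded comment.
# Trimmed here to two records taken from the raw output above.
raw_response = (
    '[{"id":"ytc_UghZX0I6wYeM7HgCoAEC","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"approval"},'
    '{"id":"ytc_Ugj3tGKiioJxx3gCoAEC","responsibility":"developer",'
    '"reasoning":"deontological","policy":"ban","emotion":"fear"}]'
)

def lookup_coding(raw: str, comment_id: str):
    """Parse a raw batch response and return the record for one comment id."""
    for record in json.loads(raw):
        if record["id"] == comment_id:
            return record
    return None  # id not present in this batch

coding = lookup_coding(raw_response, "ytc_UghZX0I6wYeM7HgCoAEC")
print(coding["emotion"])  # prints "approval"
```

If the model ever returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is a natural place to flag the batch for manual review.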