Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Back when Google began developing self-driving cars, they said that the development is the easy part but that once the technology rolls out and the first people die in crashes that were not magically prevented by ai replacing the human component in only some vehicles, that would be when the actual battle begins and it gets really expensive to defend the technology against those who believe that a hundred deaths caused by self driving cars are much worse than ten thousand deaths caused by only human failure. Now, suddenly it starts to make sense why they made Elon so extremely rich. I guess that way he can survive the time until society accepts that even if not 100% failproof, self driving cars are actually a lot safer on average. Fascinating!
youtube · AI Harm Incident · 2025-10-19T16:0… · ♥ 1
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwV5UQXPw2H4R9Uzqt4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwKR2UGzjgsCR85Oal4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxnQ3HCy1z-6qClJq14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwXgHJvTtHvLLYiIb54AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz-hsjLkvAau0I8DlZ4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugysg6rwsyzaIoJhnu14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxE0nyxr64eRE3ZhQl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwloWy2D0uUw94JxyB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyQhUzoOQhvb6E4Uv14AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugx1jTYI9x8D9xXm5Ml4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
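As a minimal sketch of how a raw response like the one above can be turned into per-comment codings, the snippet below parses the JSON array and indexes it by comment id. The field names mirror the raw LLM response shown here; the variable names and the lookup itself are illustrative, not the tool's actual implementation.

```python
import json

# One entry from the raw LLM response above, used as sample input.
raw = '''[
  {"id": "ytc_UgyQhUzoOQhvb6E4Uv14AaABAg",
   "responsibility": "distributed", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "fear"}
]'''

# Index the array by comment id so a coding can be looked up directly.
codes = {item["id"]: item for item in json.loads(raw)}

# Retrieve the coding for the comment displayed above.
code = codes["ytc_UgyQhUzoOQhvb6E4Uv14AaABAg"]
print(code["responsibility"], code["emotion"])  # distributed fear
```

This matches the Coding Result table for the displayed comment: the entry with policy "liability" and emotion "fear".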