Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@vshah1010 Even if the AI makes a few mistakes (and I'm sure it does), it is a very easy choice: 1. Let AI drive cars and have about 1% of the accidents we have right now and only a few deaths each year. 2. Let humans drive cars and have the same amount of accidents and about 40.000 deaths esch year. AI is not a perfect solution, but it is way better than the alternative. There is absolutly no reason to let humans drive a car on public roads.
youtube AI Harm Incident 2024-04-15T07:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_UgzyoQHfkvKymBmesal4AaABAg.A-sXnd6yXDiA2Eam8enQJJ","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzyoQHfkvKymBmesal4AaABAg.A-sXnd6yXDiA2EbmVWX5QB","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_UgzyoQHfkvKymBmesal4AaABAg.A-sXnd6yXDiA2F4IIrYzpR","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgxO_Ujk5rSvOjWRKBB4AaABAg.9qpyK0rFpPmA2EdMqzaPey","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
  {"id":"ytr_Ugxr6E9-mHJqZTtbMkB4AaABAg.9n3_k2CWmk5A2EdhXW4VlG","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgwCHoJvEcws5sbtNMJ4AaABAg.9gAd7y-h4HM9usT1V03sv4","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytr_Ugxhzhse5PJvVG9QIF54AaABAg.9eKDGjvyiy7A2Eet4FGYqd","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytr_Ugw4jM93_9cAtGe9wgN4AaABAg.9Wp06dt0zPM9ckPtZde_5N","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgyfZZIYhGkJqOVkj3p4AaABAg.9NJbV2HYicl9UBo94mRsqU","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_Ugxx9whhaDWkfEJOjy14AaABAg.9GhF4K2osdN9KjCloO1jaE","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}
]
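A raw response like the one above can be parsed and checked before its values are written back to the coding table. The sketch below is a minimal validator, assuming the allowed values per dimension are exactly those seen in this output (the full codebook may include more categories), and using a shortened hypothetical `id` in the usage example:

```python
import json

# Allowed values per coding dimension, inferred from the values seen in
# this output; the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"approval", "indifference", "fear", "outrage", "unclear"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with out-of-codebook values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: invalid {dim}={rec.get(dim)!r}")
    return records

# Usage with a single record (shortened hypothetical id):
raw = ('[{"id":"ytr_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
records = validate_coding(raw)
print(len(records))  # 1
```

Validating before ingestion matters here because LLM coders occasionally emit labels outside the codebook; failing loudly on an unknown value is safer than silently storing it.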