Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You present it as if AI came up with that stuff on its own when in reality it's trained on human behaviour. Even calling it AI is misleading, it is not intelligent. Imho besides the environmental issues, the biggest risk is that humans often don't question AI's faulty outputs and rely too much on it.
youtube · AI Moral Status · 2026-01-16T07:5… · ♥ 1
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwsibNcAOLO1oOS1Ld4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw49QSmXU-bBzk61kt4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugzfqh8EaswrR7LGSHF4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzHY7InuwID3hh8Izp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwDem71BKYtCqU2b4R4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy2xsheyZxo84Qqt5Z4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugyrd4DcPbbUKS7oDs14AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgydRcBZrEt5vHLLKm94AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyEJTGIdnwjibPHdTt4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyTlKflV7b7X_klOsV4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
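Recovering one comment's coding from a raw response like the one above can be sketched in Python, assuming the response parses as a JSON array of per-comment objects (the snippet inlines only the single entry that matches the coding result shown here; the id and field names are taken from the response itself):

```python
import json

# Raw LLM response (truncated to one entry for illustration); in the
# real tool this would be the full JSON array returned by the model.
raw = '''[
  {"id": "ytc_Ugy2xsheyZxo84Qqt5Z4AaABAg",
   "responsibility": "user",
   "reasoning": "deontological",
   "policy": "liability",
   "emotion": "fear"}
]'''

# Index the batch by comment id so any coded comment can be looked up.
codings = {entry["id"]: entry for entry in json.loads(raw)}

coding = codings["ytc_Ugy2xsheyZxo84Qqt5Z4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # → user fear
```

The printed values match the "Coding Result" table above, which is the point of this view: the stored dimensions should be traceable back to the exact model output.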