Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What do you think happens to a system you feed with unknown data from all kinds of sources...? The amount of data you need to train an AI is so significant that the control of the data sets often falls far too short. There is already evidence of AI bias in the code.
Source: youtube, AI Moral Status, 2025-06-04T14:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxyUFlx3vqzrBVcE-J4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzPUIsljKakarmejzV4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxseywyidAD9PcFdnd4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugzx8VkgQxgVBRFOEFZ4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxCAik_r6-j0uLzPF54AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugy5pNhT781DINcrxIF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwMh8xvyhS5kN8CkNh4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzJ0K2uvjtjudpYxGp4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzwqTj5ygv_u5nexQR4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgylhIXEfFv9ZQ7-jXZ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"}
]