Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I would be very skeptical of any regulation of ai. Those developing their own systems at home civil rights are opened up to violations, who are just looking for a startup. This was the idea to startup, to evaluate data misunderstood by manipulation and malpractices, based off psychology and law so individuals couldn’t be lied to and cheated.
Source: youtube · AI Governance · 2024-11-09T20:5…
Coding Result
Dimension       Value
Responsibility  government
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugz01_5_Hb6yn5xKq5N4AaABAg", "responsibility": "company",    "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzwurJCdaaT_5UaW314AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgzBHMx8_t6BkMBj8Ox4AaABAg", "responsibility": "developer",  "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgzrhsbrOvBqAxJLej54AaABAg", "responsibility": "company",    "reasoning": "consequentialist", "policy": "regulate", "emotion": "concern"},
  {"id": "ytc_Ugw2ZTplbMprtYsaOol4AaABAg", "responsibility": "government", "reasoning": "deontological",    "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugyd1JzU5FReyNYZfL54AaABAg", "responsibility": "company",    "reasoning": "virtue",           "policy": "unclear",  "emotion": "outrage"},
  {"id": "ytc_UgxbueR2JaacTRJvoCJ4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy7Q9frg44rugvZLZx4AaABAg", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgwG8o6HESvDVl6Z8lJ4AaABAg", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz0j3RPi6242EsIZhx4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"}
]