Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Even a human can make mistakes, so an AI developed by a human can make mistakes too. It may be accurate, but 99% accuracy doesn't mean it works correctly; the remaining 1% of mistakes matters and can flip the result to an unwanted outcome. I think it will always be better that we command them and they follow.
youtube 2025-02-02T13:2…
Coding Result
Responsibility: developer
Reasoning: virtue
Policy: industry_self
Coded at: 2026-04-27T06:24:53.388235
Emotion: approval
Raw LLM Response
[
  {"id": "ytc_UgxehzWARbRPzPfYJzp4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzCHq2NwKGqtbncJBl4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzceEIWEw5vmTUFYmx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw5NZo6FE4cFoWQ7aJ4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwClC8gyY3wXJBLPbh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyrvUrA1YCHG7cm5lJ4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgygsxcbE62_mU0cm114AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxlXDJY0iWJs2PLWpx4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgzhFhe5JRr2HgZ1BQd4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzTOj7lEGTIRskiwip4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
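A response like the one above can be turned into the per-comment coding result by parsing the JSON array and looking up the record for the comment's ID. The sketch below assumes only the field names visible in the response (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the validation rule (require all five keys) is an assumption, not part of the tool's documented behavior, and the array is truncated to two records for brevity.

```python
import json

# Raw LLM response as returned by the model (truncated to two records here).
raw = '''[
  {"id": "ytc_UgxlXDJY0iWJs2PLWpx4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgxehzWARbRPzPfYJzp4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]'''

# Field names observed in the response; treating them as required is an assumption.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(payload: str) -> list[dict]:
    """Parse the model output, keeping only well-formed code records."""
    records = json.loads(payload)
    return [r for r in records if REQUIRED_KEYS <= r.keys()]

codes = parse_codes(raw)
by_id = {r["id"]: r for r in codes}

# The coded dimensions for one comment, matching the Coding Result above.
print(by_id["ytc_UgxlXDJY0iWJs2PLWpx4AaABAg"]["policy"])  # industry_self
```

Keying the parsed records by `id` is what lets the per-comment view (Coding Result) be reconstructed from the batch response.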