Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
8:30 I think the issue is they're making an ai remember far too many topics to be able to go in depth and explain the nuances of any single one with real accuracy, because it doesn't have the understanding in its model, only the information being garbled and crossreferenced to similar question-response pairs.
youtube AI Moral Status 2025-10-31T08:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxDnIoEo8ZbXyr5gjh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgytW-GFXJSUN9uBFVp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxynuJxFS_FbMo81g54AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw_ATcFAHKUQNw50DN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw1mrYMBx73tZPRaQ14AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx50-ofaOvFNrJrstR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyiYJcYJxaAlKvJpO14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzU1rD9uZP-NvKdFZN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw5kBX-Eb-8WMNfhZl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyityX5J7uPtWaGxG54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}
]
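The raw response is a JSON array of coding records, one object per comment, each carrying an `id` plus the four dimensions shown in the table above (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch response could be parsed into a per-comment lookup, assuming this array shape holds for every batch (the `parse_codings` helper and the validation rule of skipping records with missing fields are illustrative, not part of the original pipeline):

```python
import json

# Abbreviated raw LLM response: a JSON array of coding records.
raw = """
[ {"id":"ytc_UgxDnIoEo8ZbXyr5gjh4AaABAg","responsibility":"ai_itself",
   "reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw1mrYMBx73tZPRaQ14AaABAg","responsibility":"government",
   "reasoning":"deontological","policy":"ban","emotion":"outrage"} ]
"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(text):
    """Parse a batch response into {comment_id: {dimension: value}}."""
    out = {}
    for rec in json.loads(text):
        # Skip malformed records rather than failing the whole batch.
        if "id" not in rec or not all(d in rec for d in DIMENSIONS):
            continue
        out[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return out

codings = parse_codings(raw)
print(codings["ytc_UgxDnIoEo8ZbXyr5gjh4AaABAg"]["responsibility"])  # → ai_itself
```

Keying the result by comment `id` lets the inspection view above join each comment back to its coding in constant time.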