Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Cringe. Nate Soares thinks there's an actual mind wanting things inside LLMs. But LLMs can't model minds or the actual world. They can only model how words tend to appear on a page. Words on a page have no meaning. You need encoders and decoders modeling the mind of the other before meaning happens.
Source: youtube · Video: AI Moral Status · Posted: 2025-10-31T08:4… · ♥ 3
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           none
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxDnIoEo8ZbXyr5gjh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgytW-GFXJSUN9uBFVp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxynuJxFS_FbMo81g54AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw_ATcFAHKUQNw50DN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw1mrYMBx73tZPRaQ14AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx50-ofaOvFNrJrstR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyiYJcYJxaAlKvJpO14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzU1rD9uZP-NvKdFZN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw5kBX-Eb-8WMNfhZl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyityX5J7uPtWaGxG54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}
]
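The coding result above can be recovered from the raw response by parsing the JSON array and indexing the rows by comment id. A minimal sketch in Python, assuming the raw response is a valid JSON string (the `raw` literal here is abbreviated to one row from the array above; the actual pipeline's parsing code is not shown in this page):

```python
import json

# Abbreviated raw LLM response (one coded row from the array above).
raw = """[
  {"id": "ytc_Ugw5kBX-Eb-8WMNfhZl4AaABAg",
   "responsibility": "developer",
   "reasoning": "deontological",
   "policy": "none",
   "emotion": "mixed"}
]"""

# Index the coded rows by comment id so any comment's coding can be looked up.
rows = {row["id"]: row for row in json.loads(raw)}

# Look up the coding for the comment shown on this page.
coding = rows["ytc_Ugw5kBX-Eb-8WMNfhZl4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # → developer mixed
```

This lookup matches the Coding Result table above: the comment's row codes responsibility as `developer` and emotion as `mixed`.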