Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The risks of LLMs (not calling it AI) are not always exciting, and I think that is the part that is overlooked in these discussions. Rather than doomsday apocalypses, LLMs will instead exacerbate existing problems: government inefficiencies, wealth inequity, misinformation, energy demands, climate catastrophe, and water scarcity. No doubt crisis capitalists will also step in and create loads of services that mitigate the damage LLMs will cause.
Source: youtube · Topic: AI Governance · Posted: 2025-08-26T22:5… · ♥ 1
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugx4KV-2yDReQ0qz05V4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwrkOHDbIDkthZMezV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzxfTe0gBZQXmyV0zR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzlqmOvvGTKQWigJ5V4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy10QP_A801fTymqfB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
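The raw response above is a JSON array of per-comment codings, each keyed by a comment `id`. A minimal sketch of how such a response can be parsed to look up the coding for one comment (the field names and `coding_for` helper are assumptions based on the sample shown, not part of any particular pipeline):

```python
import json

# Assumed shape: a JSON array of objects, each with an "id" plus the
# four coding dimensions, as in the raw response shown above.
raw_response = '''[
  {"id": "ytc_Ugy10QP_A801fTymqfB4AaABAg",
   "responsibility": "distributed",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "fear"}
]'''

def coding_for(raw, comment_id):
    """Return the coded dimensions for one comment id, or None if absent."""
    for entry in json.loads(raw):
        if entry.get("id") == comment_id:
            # Drop the id so only the dimension/value pairs remain.
            return {k: v for k, v in entry.items() if k != "id"}
    return None

print(coding_for(raw_response, "ytc_Ugy10QP_A801fTymqfB4AaABAg"))
# → {'responsibility': 'distributed', 'reasoning': 'consequentialist', 'policy': 'regulate', 'emotion': 'fear'}
```

Looking up an id that is not in the array returns `None`, which makes a missing coding easy to detect downstream.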