Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
I can't listen to the whole thing, mainly because it seems - so please correct me if I am wrong here - this talk does not distinguish enough, and did not made the difference clear, between generative algorithms, these companies call AI, and actual AGI. Actual artificial intelligence. From my understanding, generative algos cannot ever be actually intelligent. That is not what the formula they are based on can do. It's prediction. They may seem intelligent, but are basically a branched out answer system of a game. And because they also usually try to predict the most likely next letter/word/pixel/number, that they are and always will be wrong about things. Because not all things will be likely, and knowing and understanding that is also intelligence. They need to understand what they are doing. And they don't. They can't. They just seem they do. And because we humans put these into weapon systems already, do we really want a machine that does things we don't understand? Or has everyone forget Helsing already? And secondly, let's assume we actually achieve AGI, why? Hasn't TNG taught us that it would either be: A race of intelligent machines we have to cohabit with, or basically a new race we humans keep enslaved for our benefit?
youtube AI Moral Status 2025-11-02T10:0… ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  company
Reasoning       deontological
Policy          regulate
Coded at        2026-04-26T23:09:12.988011
Emotion         outrage
Raw LLM Response
[
  {"id": "ytc_UgynFWz-RLkRLTjf9a14AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "liability",     "emotion": "approval"},
  {"id": "ytc_UgxJ0_RUc38ucjPngSV4AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgwZwjCz5HC-sVA5Nt54AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "regulate",      "emotion": "outrage"},
  {"id": "ytc_UgznrcprNynvWSGRpx14AaABAg", "responsibility": "government",  "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_Ugy3hxe2_aS-64gsdPl4AaABAg", "responsibility": "ai_itself",   "reasoning": "contractualist",   "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgyndY4BFkUDnD9zSCV4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgxMAB12w2p_bUBnpER4AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgyWsZERstQUVHz0FYV4AaABAg", "responsibility": "company",     "reasoning": "virtue",           "policy": "regulate",      "emotion": "outrage"},
  {"id": "ytc_Ugyor9ZHL-il_uW3zVx4AaABAg", "responsibility": "distributed", "reasoning": "contractualist",   "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgxlHxujpzxMQS9lMaB4AaABAg", "responsibility": "user",        "reasoning": "deontological",    "policy": "liability",     "emotion": "outrage"}
]
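The raw response above is a JSON array with one record per comment, keyed by comment id. A minimal sketch of how such a batch can be parsed, validated, and indexed — the `ALLOWED` code sets below are inferred only from the values visible in this response (the real codebook may be larger), and `validate_batch` is an illustrative helper, not part of the original tool:

```python
import json

# Allowed codes per dimension. Inferred from the values visible in this
# batch; the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "user",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "virtue", "mixed"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"approval", "outrage", "fear", "resignation",
                "indifference", "mixed"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index it by comment id.

    Raises ValueError if a record is missing a dimension or uses a
    code outside the (assumed) codebook.
    """
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Example: look up the codes for the comment shown above.
raw = ('[{"id":"ytc_UgwZwjCz5HC-sVA5Nt54AaABAg",'
       '"responsibility":"company","reasoning":"deontological",'
       '"policy":"regulate","emotion":"outrage"}]')
codes = validate_batch(raw)
print(codes["ytc_UgwZwjCz5HC-sVA5Nt54AaABAg"]["emotion"])  # outrage
```

Validating every record before storing it is what makes a mismatch between the table and the raw response (as in the "Coding Result" above) easy to audit: the table row can be regenerated directly from the matching id in the batch.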