Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI isn’t something we should “fear” for being too smart, because unlike humans it can’t just go somewhere, gather new data, run fresh analyses, or come up with hypotheses about things it has never seen. It can only work within the boundaries of what is on the web. Humans, by contrast, can decide tomorrow to do something completely outside that scope and think beyond any existing reference, so the real issue isn’t AI outsmarting us but people misunderstanding what AI is actually capable of. Also consider that data is regularly wiped out for one reason or another: transfers that go wrong, websites taken down because the hosting is no longer paid or the site is obsolete, and so on. If knowledge gets lost because we digitise it without keeping a physical backup, we could end up with a propaganda tool, because information could stop reaching the general public, whether because of those hosting the data, governments, or a lack of real traffic on the net that isn’t AI... I do not call that intelligence when information is controlled or cannot think outside of the frame.
Source: YouTube · AI Governance · 2025-12-04T10:5… · ♥ 1
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_UgxE3mNRmY_nmrkQmTN4AaABAg.AQJ3I1baLVwAQJCM0zCAtS", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_Ugwdlb2VCG7dy8grIDN4AaABAg.AQJ39cAuiNMAQJCL7eK9hE", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugwdlb2VCG7dy8grIDN4AaABAg.AQJ39cAuiNMAQJF7ZGxeHX", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_Ugwdlb2VCG7dy8grIDN4AaABAg.AQJ39cAuiNMAQJIrIsdRiE", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugwdlb2VCG7dy8grIDN4AaABAg.AQJ39cAuiNMAQJbJKnKSX2", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgxN5GoO7bwfKsJxLxJ4AaABAg.AQJ2NF_5yNuAQJ5pq0-VrQ", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytr_UgxN5GoO7bwfKsJxLxJ4AaABAg.AQJ2NF_5yNuAQJ9K1oG76q", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgwK8rWcVdrfE3q0f694AaABAg.AQJ2MivzVjdAQJ8hEvU1En", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgxLXcCEnU_2CE2CR694AaABAg.AQJ2Ebb3-VqAQJDlMktl6b", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytr_UgxfFZywJl355gR3crB4AaABAg.AQJ23WHD7s4AQJCtTS9jxh", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
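The raw LLM response is a JSON array with one coding record per comment, each carrying the four dimensions shown in the result table (responsibility, reasoning, policy, emotion) plus the comment id. A minimal sketch in Python of parsing and validating such a response and looking up a single comment's coding; the id `"ytr_example_id"` and the set of allowed emotion labels are assumptions based only on the values visible in this dump:

```python
import json

# Hypothetical raw LLM response in the same shape as the dump above.
# "ytr_example_id" is a placeholder, not a real comment id.
RAW = """
[
  {"id": "ytr_example_id",
   "responsibility": "ai_itself",
   "reasoning": "consequentialist",
   "policy": "none",
   "emotion": "indifference"}
]
"""

# Emotion labels observed in the dump; the real codebook may allow more.
ALLOWED_EMOTIONS = {"approval", "indifference", "fear", "outrage"}

codings = json.loads(RAW)

# Basic validation: every record must carry all four coding dimensions.
for c in codings:
    assert {"id", "responsibility", "reasoning", "policy", "emotion"} <= c.keys()
    assert c["emotion"] in ALLOWED_EMOTIONS

# Index by comment id so one comment's coding can be looked up directly.
by_id = {c["id"]: c for c in codings}
print(by_id["ytr_example_id"]["emotion"])  # prints "indifference"
```

Indexing by id mirrors how this inspection view works: given a comment, fetch its coded dimensions from the batch response.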