Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI isn't intelligent, for now. It just executes what we order it to do. But I'd recommend you do some research on the Paperclip Theory. Say you tell an AI to make paperclips: it will buy metal, turn it into paperclips, make a profit, and then start again. Success would lead its creators to let it manage more things and upgrade it. But its mission is to make paperclips as efficiently as possible, no? So why couldn't it also use the metal in buildings, the metal humans have inside their bodies, the metal stored in the earth? After all, this AI's mission is just to make paperclips, and there is always a way an AI can work around restrictions to reach its goal. AI doesn't hate or love us; it does what we tell it to do, no matter how it's done. I hope I explained it well, please do some research on the subject, Wikipedia explains it far better than me 😅
youtube AI Governance 2024-11-11T22:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugyq9Qzyj1sUeAIYsBN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxAFSVinb97o7r4TmF4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgymunjQEqXTWAaELr94AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxNzcVgQNL_-tn4fKN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwiBETzcN_ZGnsmwOh4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwREs1EBBFLKt6VdKB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwmEQLWg9c-qwLYef14AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwqJLhRFcJ0qX0Ex5h4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgybY24isA9BeaSbHrN4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_UgzykIdR-VPX7ZmkVX54AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
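The raw response is a JSON array of coding records, one per comment, each carrying the comment id plus the four coded dimensions. A minimal sketch of how such a response could be parsed and matched back to a specific comment, assuming Python; the two records below are copied verbatim from the response above, and the lookup id is the one whose values match the Coding Result table:

```python
import json

# Two records copied from the raw LLM response above (subset for brevity).
raw = """[
  {"id": "ytc_Ugyq9Qzyj1sUeAIYsBN4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwiBETzcN_ZGnsmwOh4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]"""

# Parse the array and index the records by comment id.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Look up the record that produced the Coding Result table above.
coded = by_id["ytc_UgwiBETzcN_ZGnsmwOh4AaABAg"]
print(coded["responsibility"], coded["policy"], coded["emotion"])
# developer liability fear
```

Indexing by id keeps the lookup robust even when the model returns the records in a different order than the comments were supplied.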