Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@SarahNGeti wow you really are naive aren’t you? Free will? Self purpose? Go study machine learning and AI then come back to me and have an intelligent conversation. artificial intelligence is only as intelligent as the data that is fed into it. It can’t have free will. Free will doesn’t exist. Not even for humans. It isn’t possible for machines to take over as they cannot and will never have independent thought. They cannot do anything they are not programmed to do. Elon Musk isn’t talking about some terminator scenario. AI is dangerous in that it it can possibly replace most jobs, except jobs like mine, although AI can help make my job easier. But super powers like Russia and America can use AI to develop more powerful weapons and drones and machines that would be a real threat.. to the enemy. So technically an army of AI powered fighters could be used in warfare. But in that unlikely scenario.. and I mean unlikely in that we are a LONG way off being able to make a fraction of that achievable.. certainly not in our lifetimes.. such an AI would still only be able to do what is was programmed to do, it cannot develope congnitive thought and “take over”. It is a program and the end of the day. It boils down to machine code, 0’s and 1’s. Binary is all it can ever realistically process and it can never diverge from that. So the danger in how AI is used as we explore the capabilities of AI. Not the AI itself. In the words of Walter white “I am the danger. I am the one who knocks” , I being the human entity who uses AI for whatever shady purpose they have planned
youtube AI Governance 2024-06-19T08:3…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        deontological
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_Ugwb_q_uO01AcgQ9Rit4AaABAg.A3wxOBKhQflA5NQSzz2rp1","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_Ugz0qxmpDNE2n37m1gl4AaABAg.A3uwOK7SdvNA3uxC8vb7_C","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_Ugx03nrx4iutuHNQZ6t4AaABAg.A3tm_l8jEp6A4jyQOuPeab","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgyROUkVltxNd6L7iqt4AaABAg.A3qox3VRms8AQg6hgdxXgp","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugzp0m-b2ji_pH9HAhZ4AaABAg.A3nEKigNoDdA4DPs-BSjj3","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgwuHuXbmEo52up0Anh4AaABAg.A3l9nLnWS9yA3zTMB3VjDg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_Ugx5i7PTWaXed3aGynF4AaABAg.A3d0VG0gYBVA4ovYCnazaU","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugx5i7PTWaXed3aGynF4AaABAg.A3d0VG0gYBVA4qj1LT51XS","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_Ugx5i7PTWaXed3aGynF4AaABAg.A3d0VG0gYBVA4rWVsGBT-u","responsibility":"unclear","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwBcUWFtA6O1BwUv494AaABAg.A2in1fBzbRBA3W98UJ3CVb","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
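The raw response above is a JSON array in which each object carries a comment id plus the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of loading and validating such a response in Python, using one record copied verbatim from the array above, might look like this (the key set is an assumption inferred from the records shown, not a documented schema):

```python
import json

# One record copied from the raw LLM response above; a real response
# would be the full JSON array returned by the model.
raw = '''[
  {"id": "ytr_UgwBcUWFtA6O1BwUv494AaABAg.A2in1fBzbRBA3W98UJ3CVb",
   "responsibility": "developer", "reasoning": "virtue",
   "policy": "regulate", "emotion": "outrage"}
]'''

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

codings = json.loads(raw)
for record in codings:
    # Each record is expected to expose exactly these five keys;
    # a missing or extra key signals a malformed model response.
    missing = EXPECTED_KEYS - record.keys()
    if missing:
        raise ValueError(f"record {record.get('id')} missing keys: {missing}")

print(len(codings), codings[0]["responsibility"])
```

A check like this makes it easy to catch truncated or malformed model output before the codings are written into the results table.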