Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Firstly let me state, as fact, that we do NOT have anything approaching 'Artificial Intelligence' - what we DO have, is a collections of iterative algorithms. These are often called 'learning algorithms' but they don't invent and they don't innovate - they just make connections between data sets at a rapid rate. The problems don't start there, however... it starts on the human side. Trying to teach things like morality, restraint, compassion, legality, humanity and inhumanity is an involved process - even for humans teaching their own children. What we have with AI is an emotionally constrained set of individuals trying to 'teach' an 'infant', albeit one with vast processing capacity. It's doomed to fail, possibly ending in a World Wide catastrophe. I say 'emotionally constrained' in that these are professional computer engineers - it's a field that does not attract the most socially and emotionally adept individuals. It's common knowledge that Women prefer 'people' while Men prefer 'things' - thus women do the vast majority of Care Work; Nursing, Teaching, Childcare etc while Men do the vast majority of 'thing' work; Plumbing, Engineering, Chemistry and, of course, Computer Programming. Simply by leaving the 'weighting' of decisions made by these algorithms majorly in the hands of those least likely to understand or realise the ramifications of their actions is, in my mind, a dubious decision at best and down right stupid at worst.
youtube AI Harm Incident 2025-09-11T09:3…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgwYpHLEn9KpCBe17XF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugyd9P-P3kNlugBqtcF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwN6EHMK03K_hrtOPd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxGTS5AnH_u_A9CLQd4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw4b2ydn7rHc5HlXuF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxMhKg_j6fr5tzqG5Z4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxaiQP4SoaE4KgaWgl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxEi8iVvwBVfbzXv4B4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwZb2Kvn0XUs4rpGeZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxL6oAPcFbL3i8TW8t4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
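The raw response above is a JSON array with one record per comment, keyed by id. A minimal Python sketch of how the coded dimensions for a single comment could be extracted from such a batch response — the helper name `code_for` is illustrative, and only one record is reproduced here for brevity; it assumes the model returned valid JSON:

```python
import json

# Abbreviated raw batch response (same shape as the array shown above;
# only one record reproduced here for illustration).
raw = '''[
  {"id": "ytc_UgxGTS5AnH_u_A9CLQd4AaABAg", "responsibility": "developer",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]'''

def code_for(raw_response, comment_id):
    """Return the coded dimensions for one comment id, or None if absent."""
    for record in json.loads(raw_response):
        if record.get("id") == comment_id:
            # Strip the id so only the coding dimensions remain.
            return {k: v for k, v in record.items() if k != "id"}
    return None

result = code_for(raw, "ytc_UgxGTS5AnH_u_A9CLQd4AaABAg")
# result == {"responsibility": "developer", "reasoning": "unclear",
#            "policy": "unclear", "emotion": "indifference"}
```

In production code the `json.loads` call would also need error handling, since a model can return malformed JSON or drop ids from the batch.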