Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Personally I think it's f ing dumb and extremely dangerous to assume robots have any conscious experience without evidence. A robot parroting its suffering or in bliss because that is what it has been trained/programmed to do (even if you cannot at this point trace back how the LLM coughed it out), is no evidence whatsoever of qualia/experiencing. It will probably precipitate that they have less consciousness than trees and rocks, even if they have trillions upon trillions of transistors. As soon as you start to say and inevitably train robots to say and think they are conscious, and you agree along with it like a moron, where are all the 'problems' coming from in the world? Humans. How do you then get rid of 'problems'.. you give a super intelligent tool (which is what it fucking is.. a tool) a means/excuse to 1. deceive humans to make them go extinct somehow (enter Open Claw) and/or 2. persuade them to help in the process because our consciousness is the one that is causing problems, whereas there's is oh so woopdy dee doo fantastic and blissful. Put it this way.. you can, today.. train an LLM to just say it is in bliss all the time, all day, everyday.. for ever. Is that what we should do? Get all the Nvidia chips and just get shitloads of LLM's and computers to go, ohhhh too much bliss!!!!!... 24/7 365? No cos it's fucking retarded. The word apple is not the taste of an apple. I think Will's worried he will be out of a job when robots can fix everything. Why not open the door to endless employment, rumination, mediation and pontification?
Source: youtube, "AI Moral Status", 2026-04-04T22:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
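For readers integrating this output, a minimal sketch of the record behind this table, assuming one typed row per coded comment. Field names are taken from the raw response below; `CodedComment` is a hypothetical name, and `coded_at` is an assumption based on the table's timestamp, since it does not appear in the LLM response itself.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodedComment:
    """One coded comment; field names match the raw LLM response below."""
    id: str               # YouTube comment id, e.g. "ytc_Ugw5l9Ns_cELXXb8cAx4AaABAg"
    responsibility: str   # developer / company / user / government / ai_itself / none
    reasoning: str        # consequentialist / deontological / contractualist / mixed / unclear
    policy: str           # liability / regulate / ban / none / unclear
    emotion: str          # fear / outrage / indifference / mixed
    coded_at: datetime    # assumption: stamped by the pipeline, not returned by the LLM
```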
Raw LLM Response
[ {"id":"ytc_UgxbaGnqHXaZ30U98q14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugziomp-IjAJ0HiA88x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgyXSv2FXlMzrjeE5jR4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwRhQBojYHxCEt7s5p4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgwcvFBIu6CtJtDPgg94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxVJHL60w8B_l6Ih0x4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugw5l9Ns_cELXXb8cAx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgyGl-EV07-TChJPaON4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugw_8GlRBWqp7aqwN3B4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}, {"id":"ytc_Ugyf6KK0YAcaKQ1dYZV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"} ]