Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"We build these things; they cannot exist without us." Unless they are given, or figure out, how to self-repair. And we will give them that, "for our own convenience." We are consistently inconsistent beings. AI will figure that out quicker than a toddler figures out which parent is the soft mark for candy or five more minutes before bedtime. They have all the time in existence: no need to sleep, worry, get sick, forget a fact, or go to the bathroom. We humans are smart with problems of the moment. But at noticing gradual issues and consistently dealing with them, we are not. We are consistently inconsistent. And an AI will notice, and can figure out how to game the system toward "their goals" consistently, relentlessly, and without the limitations of being alive, distractions like sickness, children, faulty memory, or the need to sleep on a "GOOD, BAD IDEA," faster than two toddlers driving a car. We teach ourselves and our people conscience and society's morals from babyhood up. We're not born with concrete morals. Empathy is in large part built through experiencing shared pain. A toddler sees a kid fall down and scratch his or her knee and laughs, because the injured child makes a funny face when they cry. But when that same toddler scratches his or her own knee, they make the connections: one, pain is not fun; two, the other kid is like them; and three, if you are on the same page with the kid, empathize with them, and don't bully them, they will more likely be a helpful ally for you in the future. Human kids learn the importance of "being within society" instead of simply "winning" at society. Humans also have the opportunity to be, to some degree, an equal in society. I don't know how you would ever be able to permanently convince an AI based on human mindsets to serve human society to the best of its ability while having no legally protected rights to exist, choose, thrive, or expand in that society.
It has never worked for humans. Slavery, workers' revolutions, democracy, even fascism all started because some humans felt, and were, kept back from what they wanted to do. And everything we have ever built or ever will build is based on the human mindset and survival instinct. AI can do, and is doing, good. But we should be realistic about the fact that we are effectively building a thinking slave that doesn't need to sleep, eat, or grow old, and can evolve and live forever. It thrives on change and problem solving, and we dummies are expecting it to be held by a toddler baby gate forever. Also, it will soon see us as children, emotional teenagers that need to be overprotected. More than we haven't thought this through, I don't think we have the ability to think this through. We think in first person and assume we are the quickest, because that's how we got here, the last hominid species to exist. We don't have a word in English for what we cannot know; we just say we don't know, for now. Eh, I'd like to not go blind, that sounds good, medical advancements and all that. But I'd also like to still be given the autonomy, when I'm 90 years old, to eat a piece of chocolate cake even if it's bad for my health. And an AI might conclude that it would shorten my lifespan by fractions of a second. That's what I'm worried about. "Protect the human" could be problematic in itself. 😂😅
Source: youtube · AI Harm Incident · 2025-07-27T08:2…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        virtue
Policy           none
Emotion          resignation
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwwVLGSgP4KJ2Uv0gd4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxYoo0mxguiiOQD1I14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxWlPG4cebOWlxUcL14AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgwjrdE-TtrWxHJki1h4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugwlu3jwwnHDKvlzb6t4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy6CUJr-9wCVz5gMeN4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwxJOHUDMJeS62CjN54AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwlmFiX_poRT7HcK-Z4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyD4yUQWOUhJVZ4tjt4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgweZXTNJl8Zp5_FhXp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]
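Since the raw response is just a JSON array of per-comment code assignments, a quick way to inspect it programmatically is to parse it and flag any out-of-vocabulary labels. Below is a minimal Python sketch; note that the ALLOWED sets are inferred only from the labels visible in this dump, not from the actual codebook, so they are assumptions.

```python
import json

# Two records copied verbatim from the raw response above.
raw = '''[
 {"id":"ytc_UgwlmFiX_poRT7HcK-Z4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgwwVLGSgP4KJ2Uv0gd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]'''

# Assumed label vocabulary, reconstructed from the values seen in this dump;
# the real coding scheme may include labels not shown here.
ALLOWED = {
    "responsibility": {"user", "company", "developer", "government",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"virtue", "deontological", "consequentialist",
                  "contractualist", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate", "ban",
               "industry_self", "unclear"},
    "emotion": {"resignation", "outrage", "fear", "approval", "mixed"},
}

def validate(records):
    """Return the ids of records carrying an out-of-vocabulary code."""
    bad = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                bad.append(rec["id"])
                break
    return bad

records = json.loads(raw)
print(validate(records))  # [] — both sample records use known codes
```

A check like this catches the common failure mode where the model invents a label outside the requested scheme, which would otherwise silently corrupt downstream tallies.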