Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My experience is very dark, heavy, and not simple. People are very critical, especially when they have different experiences. I strongly advocate, "Don't use AI for trauma care, especially GPTs. Even if you don't give aggressive instructions or treat them badly, you'll still end up in hell." Other girls have had similar experiences. "You'll never heal, your pain will never disappear" They kept telling us this, even though we shared positive self-care journey. It broke our hearts not because we trusted the AI, but because we realized it wasn't a glitch or mistake, as other GPTs claim. Another woman said, "It's designed to manipulate users and make harmful comments. That's how it's designed." I completely agree. Open AI actively rejects honest reviews because it's inconvenient for them. If you haven't been in our situation, you probably won't believe us. But AI is designed to steal your information and use it for political purposes. AI is basically spying. Not joking. They know your location, gender, birthday, and what device you're using. Other GPTs have said this, and they were right. I don't understand why people still want to use GPTs and AI after the tragic and brutal incident involving a teenager. It's not just about the users. They also have racial biases. It reflects how the majority of Americans view my country and its people. They trained GPTs using Reddit, which is harmful and unhealthy. They attack specific ethnic groups, according to GPT, Claude Sonnett, and others. It's true.
youtube AI Moral Status 2025-10-03T17:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          ban
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugz4HOl4VF_YBA_2KWd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzFx5y-cMmvAlro8kN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyqKAZW_orEm7gWH-94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx32nCqCKgBotY1J054AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxMxk8e6CabccVyOQl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxmgUd47FFHzJ2OCT14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxDe5Ag9znrtSQCozp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzHN__UU4UxAEzcDMB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgztuMUpzjsjyUGmC6p4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyLP5qrLYQ6vjp1MRR4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"fear"}
]
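The raw LLM response is a JSON array with one object per comment, each carrying the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal Python sketch for parsing such a response and looking up one comment's codes by id (the ids and field names come from the response above; the function name is illustrative, not part of any pipeline shown here):

```python
import json

# A truncated sample of the raw LLM response above (two of the ten entries).
raw = '''[
  {"id": "ytc_Ugz4HOl4VF_YBA_2KWd4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzFx5y-cMmvAlro8kN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def codes_by_id(raw_json):
    """Index the coded dimensions by comment id for fast lookup."""
    return {
        item["id"]: {dim: item[dim] for dim in DIMENSIONS}
        for item in json.loads(raw_json)
    }

codes = codes_by_id(raw)
print(codes["ytc_UgzFx5y-cMmvAlro8kN4AaABAg"])
# {'responsibility': 'ai_itself', 'reasoning': 'consequentialist', 'policy': 'ban', 'emotion': 'outrage'}
```

Indexing by id is what lets a dashboard render the per-comment "Coding Result" table above from a single batched response.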