Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I just used chatGPT for the last few days to it's limits until it stopped responded because i tried to broke it's rule. I show him to analyze my picture then i insisted to identify the person in picture(me) and found it's address. i insisted until stopped responding. Conversation continue after i refresh the page. Personally didn't find something bad about ChatGPT other than people "paranoia". I asked GPT the same question you asked about war with AI and response was absolutely no, with tons of arguments. Dont know how you get that answer from GPT. Here is GPT answer to that same question. Also my GPR responses are more detailed and different structure.

Short answer: No, there won't be a war between humans and AI. What is likely is conflict between humans, amplified by AI. AI has no intentions, no self-interest, no survival instinct. Without those, there is no basis for a real "us vs. them" war. AI is not an actor, it's a tool. The real danger is humans using AI against other humans: for propaganda, manipulation, automated decision-making without accountability. AI won't rebel. But people will increasingly hide behind AI decisions, treating them as authority. The future conflict is not: humans vs. AI but: humans vs. humans, with AI as an opaque intermediary. The core question isn't "Will AI attack us?" It's "How much judgment are we willing to outsource?"

this is shorter version of initial GPT response.
youtube AI Moral Status 2025-12-29T07:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgzI0aNDTcHxHZXPSPZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyXeLzjyccvzmpkyW94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz6YaRiUHy7r8kgUyB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxTnv0d6Z24WciNW9B4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwgpGyeIaEeJrCpd0J4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyrW42FJnjUjZxA6KB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgymQQazhsqj2hJP5Ed4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxd4_FnoT9TxjyrUjV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzzC4s8_KkQ8euh8fx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyBUMbguHwBi0jmGJd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
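The raw response is a JSON array of coded records, one per comment, each carrying the four coding dimensions shown in the table above. A minimal sketch of how such output could be parsed and tallied per dimension; the `tally` helper is illustrative and not part of the original pipeline, and the two records are copied verbatim from the response above as a sample:

```python
import json
from collections import Counter

# Two coded records taken from the raw LLM response (illustrative subset).
raw = '''[
  {"id":"ytc_UgzI0aNDTcHxHZXPSPZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyXeLzjyccvzmpkyW94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]'''

# The four coding dimensions used by this scheme.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def tally(records):
    """Count how often each value appears in each coding dimension."""
    return {dim: Counter(r[dim] for r in records) for dim in DIMENSIONS}

records = json.loads(raw)
counts = tally(records)
print(counts["responsibility"])  # both sample records code responsibility as ai_itself
```

Running the same tally over the full ten-record array would give the aggregate distribution per dimension (e.g. how many comments were coded `ai_itself` versus `user`).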