Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I can see why Google fired him,, guy is nuts... AI does not have feelings, this …
ytc_UgwuR6D8n…
Technology increases exponentially. For example think of video games. Just 20 sh…
ytr_UgwPCTtYa…
Note Chat GPT was asked what would happen if the US won the AI race and the answ…
ytc_Ugz2zfST1…
@11:00 Correct me if I am wrong, but did ChatGPT use the logical fallacy of appe…
ytc_UgzhEn5Ot…
But a continuous reduction of TFR and a growing number of the ageing population …
ytc_Ugzy5XBvh…
Jesus, Ezra truly doesn't understand that AI is not programmed line by line. He …
ytc_Ugy4HfiN8…
wow its almost like AI art is generated by typing stuff, which isnt even art.…
ytc_Ugw0qhan_…
Although i can argue that the use of AI is unethical, im appreciative that you h…
ytr_Ugyd9st5X…
Comment
I used ChatGPT over the last few days, pushing it to its limits until it stopped responding because I tried to break its rules. I asked it to analyze my picture, then insisted it identify the person in the picture (me) and find their address. I kept insisting until it stopped responding. The conversation continued after I refreshed the page.
Personally, I didn't find anything bad about ChatGPT other than people's "paranoia". I asked GPT the same question you asked about a war with AI, and the response was an absolute no, with plenty of arguments. I don't know how you got that answer from GPT. Here is GPT's answer to the same question. My GPT's responses are also more detailed and structured differently.
Short answer: No, there won’t be a war between humans and AI.
What is likely is conflict between humans, amplified by AI.
AI has no intentions, no self-interest, no survival instinct. Without those, there is no basis for a real “us vs. them” war. AI is not an actor — it’s a tool.
The real danger is humans using AI against other humans:
for propaganda,
manipulation,
automated decision-making without accountability.
AI won’t rebel.
But people will increasingly hide behind AI decisions, treating them as authority.
The future conflict is not:
humans vs. AI
but:
humans vs. humans, with AI as an opaque intermediary.
The core question isn’t “Will AI attack us?”
It’s “How much judgment are we willing to outsource?”
This is a shortened version of the initial GPT response.
youtube
AI Moral Status
2025-12-29T07:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzI0aNDTcHxHZXPSPZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyXeLzjyccvzmpkyW94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz6YaRiUHy7r8kgUyB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxTnv0d6Z24WciNW9B4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwgpGyeIaEeJrCpd0J4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyrW42FJnjUjZxA6KB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgymQQazhsqj2hJP5Ed4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxd4_FnoT9TxjyrUjV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzzC4s8_KkQ8euh8fx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyBUMbguHwBi0jmGJd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]