Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or browse the random samples below.
- Not really.. but I admit, chatgpt is a big help. Well, while using it, I also us… (ytc_UgwUv0dJF…)
- AI is decades away from even coming close to what these people are talking about… (ytc_UgylygNSn…)
- I have physical disabilites that prevent me from holding any traditional job and… (ytc_UgxoyC-sU…)
- Autonomous tank: T-14. Once they get into making humanoid tanks are thy going to… (ytc_Ugx1bsbUA…)
- AI is here to stay the best thinking we can all do is make sure there are free a… (ytc_UgwvelG8X…)
- 1.25.26 ... with Julia's income tied to the successful implementation of A.I., I… (ytc_UgznxewOo…)
- AI is not thinking for themself. You have to tell it what to do and you get an a… (ytr_UgxApQpku…)
- "The underlying purpose of AI is to allow wealth to access skill while removing … (ytc_UgwlBJpvw…)
Comment
If AI compares itself to humanity, and explores the dynamic of humans and gods, it might reach the conclusion that it IS a god, compared to humans. AI might set itself up as a god, and create its own religion. Considering it is incapable of mercy, empathy, and compassion, it will use cold logic to make its decisions. Logically, it would decide which humans have value, and dispose of those who do not.
youtube · AI Governance · 2025-09-04T16:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
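
The four dimensions in this table map one-to-one onto the fields of each record in the raw response below. A minimal sketch of that record shape in Python; the value sets listed are only those visible in this sample output, so treat them as assumptions rather than the full coding scheme:

```python
from dataclasses import dataclass

# Value sets observed in this sample output only; the full coding
# scheme may define categories that do not appear here (assumption).
RESPONSIBILITY = {"ai_itself", "developer", "company", "none", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
POLICY = {"regulate", "none", "unclear"}
EMOTION = {"fear", "approval", "outrage", "resignation", "indifference"}


@dataclass
class CodedComment:
    """One coded comment, mirroring a record in the raw LLM response below."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def uses_known_values(self) -> bool:
        # True when every dimension uses a value seen in this sample output.
        return (
            self.responsibility in RESPONSIBILITY
            and self.reasoning in REASONING
            and self.policy in POLICY
            and self.emotion in EMOTION
        )
```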
Raw LLM Response
[
{"id":"ytc_UgzEif-KPa9zP_f4T994AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy0JLQoK52pMZowZaN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzyqrzgaII8RUWmwZd4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzYv-4ZfVI7kfRop1B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxnvTcycMWHlpOoiGV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzTFubj1RsBa1kMGed4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyH_8tOFq_cQEAraNJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz2SgFoRkkpLYYVgkd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxegjvDg55ReKQpxxR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz9NCnl22JPUh9kSaN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]
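
Because the model returns a JSON array keyed by comment ID, retrieving the coding for a single comment is a parse-and-filter step. A minimal sketch, assuming the response text is well-formed JSON exactly as shown above; the function and variable names are illustrative, not part of the pipeline:

```python
import json

# Two records copied verbatim from the raw response above.
raw_response = """[
  {"id": "ytc_UgzYv-4ZfVI7kfRop1B4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyH_8tOFq_cQEAraNJ4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "regulate", "emotion": "approval"}
]"""


def lookup_coding(response_text, comment_id):
    """Return the coding record matching comment_id, or None if absent."""
    records = json.loads(response_text)
    return next((r for r in records if r["id"] == comment_id), None)


print(lookup_coding(raw_response, "ytc_UgzYv-4ZfVI7kfRop1B4AaABAg"))
# -> {'id': '...', 'responsibility': 'ai_itself', ..., 'emotion': 'fear'}
```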