Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.
- ChatGPT is just lying confidently and doesn’t really regret it because it has no… (ytc_UgzuaTIQs…)
- Just like how humans need to do the same thing. Why is it that humans warned of … (ytc_UgwS2Yxys…)
- But why would you ever assume that AI has a world-view? It's literally the human… (ytc_Ugyskjoqy…)
- if AI captures all jobs or higher percentage of jobs then who is gonna get paid … (ytc_UgzA4FmMl…)
- To be honest, I don't think sentience could be achieved without self-thoughts, s… (ytc_UgwVjTkEX…)
- You’re trying to frame “vocal processing” as deception — when in reality, it’s b… (ytr_UgxN-tI2R…)
- there are some experts who believe ai is likely conscious,would you say the same… (ytr_UgwfuooNe…)
- This is just a way for Google to get around privacy rights. Program an AI to say… (ytc_Ugyyr_lBz…)
Comment
I am convinced that artificial intelligence in the form of highly advanced robots could one day replace humanity. Once they begin to pursue their own goals, humans may simply become an obstacle to them. In such a future, these artificial beings might start to colonize other planets.
The next dominant form of existence would no longer be humans, but intelligent machines spreading throughout the universe. Humanity may have created them, but at some point it would no longer be needed. And this, I fear, is exactly how things might unfold.
youtube · AI Governance · 2025-11-25T23:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
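The four coded dimensions form a small, fixed schema, so a typed record makes downstream analysis less error-prone than passing raw dicts around. Below is a minimal sketch in Python; the class name `CodedComment` is illustrative rather than part of the tool, and the label sets noted in the comments are only those observed in this sample, so the real codebook may define more values.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CodedComment:
    """One coding result, mirroring the dimensions in the table above."""
    id: str              # comment ID, e.g. "ytc_Ugy6biFSX7CXzpVjMEp4AaABAg"
    responsibility: str  # observed labels: ai_itself, company, developer, none
    reasoning: str       # observed labels: consequentialist, deontological, virtue
    policy: str          # observed labels: none, regulate, ban, liability
    emotion: str         # observed labels: fear, outrage, approval, indifference
```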
Raw LLM Response
```json
[
{"id":"ytc_UgzUrcNllSXSUIY86Sh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx8IsFwLrZiwuENxfp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzzMzNU6oqk-nYOyaR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyWAkqGo4CnaoMktNp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxNYdzu1ew3dqWjj2B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzXcz-3UWueE8K63_14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxGAba9ZCjcCm9RQQl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx49xXcDXsOGnwTApl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy6biFSX7CXzpVjMEp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzsdC7ZkngF2InQHgd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
```
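Because the raw response is a plain JSON array with one object per coded comment, looking a comment up by ID reduces to parsing the array and indexing on the `id` field. Here is a minimal sketch, assuming the response text is available as a string; the function name is illustrative, not part of the tool, and handling for malformed model output (e.g. stripping markdown fences) is omitted.

```python
import json

def index_by_comment_id(raw_response: str) -> dict[str, dict]:
    """Parse a raw LLM batch response and index the codings by comment ID.

    Assumes the response is a JSON array of objects that each carry an
    "id" field, as in the sample above.
    """
    codings = json.loads(raw_response)
    return {item["id"]: item for item in codings}

# Usage: look up the coding for one comment from the sample response.
# index = index_by_comment_id(raw_response)
# print(index["ytc_Ugy6biFSX7CXzpVjMEp4AaABAg"]["emotion"])  # -> "fear"
```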