Raw LLM Responses
Inspect the exact model output behind any coded comment: look a record up directly by its comment ID, or pick one of the random samples below.
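For scripted access outside the UI, an ID lookup might look like the following minimal Python sketch. It assumes a hypothetical `coded_comments.jsonl` export with one coded record per line, each carrying an `id` field; the file name and record layout are illustrative, not part of this tool.

```python
import json

def load_coded_records(path: str) -> dict[str, dict]:
    """Index coded records by comment ID (assumes one JSON object per line)."""
    records = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)
            records[record["id"]] = record
    return records

# Hypothetical usage: fetch one coding result by its comment ID.
records = load_coded_records("coded_comments.jsonl")
print(records.get("ytc_UgyRNzClKEhbB1pK99B4AaABAg"))
```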
Random samples:

- `ytc_UgyB4UCJp…`: AI’s replacing jobs with repetitive tasks like accounting and data entry, causin…
- `ytc_UgzZ169O8…`: This clip offered no insight to ChatGPT whatsoever. He even started by saying "I…
- `ytc_UgwUFGgSm…`: Thats messed up, it should automatically stop answering when talking about murde…
- `ytc_UgwNd2NVL…`: The biggest irony is that im pretty sure hes opening himself to massive lawsuits…
- `ytc_UgzQi3IYy…`: Ai and artist are not even remotely close to another imma call ai stans button p…
- `ytc_UgxnnX_e7…`: I don't understand this! How can a hairdresser or a beautician easily replaced? …
- `ytc_UgxW-mKdc…`: At some point you have to wonder what AI would even DO. Why would it really need…
- `ytc_UgyNXyni4…`: They are working on AI taking over the human brain using existing technology lik…
Comment
If you think we are currently living in a simulation run by higher intelligence, what makes you think that a superintelligence created by us would kill everyone? If we believe a higher intelligence exists already, it hasn't killed us so far but rather created us. Similarly, if we look at how we have behaved towards animals (as a lower form of intelligence) - sure, there has been a lot of killing, but I would say there is a tendency towards wanting to live in harmony with nature and that is part of a learning process. Superintelligence would learn much quicker, so it would probably not depend on killing for domination. And who even says domination would be a desirable goal for a superintelligence? It might be naive, but I have the feeling the assumption that AI will kill everyone in the future is very much predicated on human behaviour and rather limited, human ways of thinking...
Source: youtube | Topic: AI Governance | Timestamp: 2025-10-09T08:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
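As a rough reference, the four coded dimensions can be written down as a typed record. The label sets below are only those observed in the sample response that follows; the actual codebook may define more values, so treat this as an assumption-laden sketch rather than the tool's schema.

```python
from typing import Literal, TypedDict

# Label sets inferred from the sample response below; the real codebook may be larger.
Responsibility = Literal["none", "ai_itself", "government", "company", "developer"]
Reasoning = Literal["consequentialist", "deontological", "virtue", "unclear"]
Policy = Literal["none", "industry_self", "regulate", "unclear"]
Emotion = Literal["fear", "indifference", "approval", "mixed", "resignation"]

class CodedComment(TypedDict):
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```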
Raw LLM Response
```json
[
  {"id":"ytc_UgyRNzClKEhbB1pK99B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzF2Y3ugnbV8EFz41R4AaABAg","responsibility":"none","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgyaTULpdOuR6e9_f3x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwEQBJQZ5o-epBJB1F4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxjP7A_QFfiF-iNtx14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzR2l4OZ_-BgZ22In54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyNSh4XfCiKAx1pfox4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxsvIjn7nEemSsClD14AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwaNfLwqVJ54vAeSZt4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxfAlFsbmd6xiVEz3d4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
```
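Since the model returns a plain JSON array, a downstream script can parse and sanity-check it before loading results into a table like the one above. This sketch assumes the response text is available as a string; the allowed-value sets mirror the `Literal` types sketched earlier and are inferred from this one sample, not authoritative.

```python
import json

# Allowed labels per dimension, inferred from the sample response above.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "government", "company", "developer"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "industry_self", "regulate", "unclear"},
    "emotion": {"fear", "indifference", "approval", "mixed", "resignation"},
}

def parse_response(raw: str) -> dict[str, dict]:
    """Parse a raw LLM response and index valid codings by comment ID."""
    coded = {}
    for item in json.loads(raw):
        bad = [k for k, allowed in ALLOWED.items() if item.get(k) not in allowed]
        if bad:
            # Flag malformed model output instead of silently keeping it.
            raise ValueError(f"{item.get('id')}: unexpected value for {bad}")
        coded[item["id"]] = item
    return coded
```

Validating eagerly like this catches the most common failure mode of structured LLM output, a label outside the codebook, before it can skew downstream tallies.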