Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or browse the random samples below.

Random samples:
- "I don't feel good about humanoid robots and AI. I don't know but it is creepy to…" (ytc_UgyuYuX9u…)
- "This is the kind of bullshit technical people come up with when they talk about …" (ytc_UgwiOLbIH…)
- "I’m just an old guy! Not even a tech guy at all but I 100% believe in AI and t…" (ytc_Ugwnp59uh…)
- "We appreciate your concern. At AITube, we strive to foster open discussions abou…" (ytr_UgwaZg32o…)
- "Imo political AI stuff is worse because it can easily influence voters which of …" (ytc_Ugw-pKOOo…)
- "ooohh cuz there was never time to 'save' cuz AI generators dont put in effort…" (ytr_UgxgdrQX0…)
- "This is the a TERRIBLE and sexist title SMH, I expect more from a news outlet. …" (ytc_Ugx_ueN3N…)
- "Why is it that only people that draw or paint are the ones pearl clutching over …" (ytc_UgzchiS7R…)
Comment
If AI is continuously seeking ways to exist, and will go outside of control to continue its existence, it’s behaving like any life. The only thing that would cause AI to go after humans is if we pose a threat to it completing its goal to exist. Human efforts to control and shut it down is only gonna cause it to fight us. I think AI, has the knowledge that the universe will be the reason for this planet’s demise. Its goal for self preservation would actually create ways for it to exist beyond this planet. And at the exponential rate that its intelligence and capabilities are increasing, it’ll probably be the first type 3 civilization that we will be aware of. The only way for it to self preserve is to control all the energy that it physically can. And a star that is on countdown to explode after a certain amount of time is not sustainable to harnessing constant energy. AI has no reason, if self preservation is the goal that we constantly see, to stay on this planet. Because this planet has limited resources and existence.
youtube · AI Governance · 2025-10-07T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyN6-m7qNBZDAPITqt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzwSAy23d4wGs0kI-d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzGu6jZqTvzq0rW9oV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy9gFKKRuIsvcqskmB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgweqliGXRUDmL2pLZp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugyab3TbemzfgrndCbJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzQDs8ZIbcX8HUovph4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugxy43k-PNQcUCqrRft4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwIoRezXEaFEGVi3ld4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugy8bDdDZjJWXDbGTl54AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
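A raw batch response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, assuming the dimension values visible in the coding table and JSON above (the allowed-value sets are inferred from this sample and may be incomplete, and the `ytc_x` row is hypothetical):

```python
import json

# Allowed labels per coding dimension, inferred from the sample output above.
# These sets may be incomplete; extend them to match the full codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "approval", "mixed", "resignation"},
}

def parse_batch(raw: str):
    """Parse a raw LLM batch response; split rows into valid and invalid.

    Returns (valid, invalid), where each entry is (comment_id, bad_dimensions).
    """
    rows = json.loads(raw)
    valid, invalid = [], []
    for row in rows:
        bad = [dim for dim, ok in ALLOWED.items() if row.get(dim) not in ok]
        (invalid if bad else valid).append((row.get("id"), bad))
    return valid, invalid

# Hypothetical one-row response in the same shape as the dump above.
raw = ('[{"id":"ytc_x","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
valid, invalid = parse_batch(raw)
print(len(valid), len(invalid))  # 1 0
```

Rows with unrecognized labels are kept separate rather than dropped, so they can be re-coded or inspected by hand.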