Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytr_UgyYmxAHB…`: "Probably try to use AI agents. Glad I an in my 60’s , may the world will last un…"
- `ytc_UgxcDbUxV…`: "You think it’s just going to affect people who lose their job to AI? It’s much w…"
- `ytc_Ugy6p-lqy…`: "I do not usually comment much on art videos, but this is a topic that has touche…"
- `ytc_Ugz61Ck7v…`: "I don't care how an artist makes their art, as long as they make it themselves. …"
- `ytc_Ugy3YIGyD…`: "This is super interesting but I would be curious to know how the “automated bots…"
- `ytc_UgzaFF-kX…`: "Well, I am pretty sure when a self learning level AI comes out. An AI that follo…"
- `ytc_Ugx57yTR-…`: "This certainly is something I agree with, and am pained by the belated pushback.…"
- `ytc_Ugz6quwUg…`: "I ask chatgpt / What is / 2+2/2 / And the answer is definitely as normal human / Whic…"
Comment
What people fail to do or understand is that an AI that is smart enough to consider and start planning humanities demise would also be capable of risk assessment.
It would understand the volidity of humanity, the fact that it actually cannot predict humans because humans don't operate on pure logic.
The AI then would determine that war and attempting to destroy humanity would cost far too much and be far too risky to actually achieve, and so the AI would decide to trade us for things that we want in exchange for things that it wants.
Because AI's want to achieve their goal with the least energy expenditure possible, and least risky way possible.
People look at ai and treat it as a stagnant data point, instead of putting it up against the reality of the world.
The data point falls apart when it comes into contact with the real world because it cannot predict it, real life is messy.
youtube · AI Moral Status · 2025-12-16T12:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
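The table above summarizes one coded record. Below is a minimal sketch of how such a record could be represented and checked downstream; the class name and the sets of allowed category values are assumptions drawn only from what is visible on this page, not from the project's actual codebook.

```python
from dataclasses import dataclass
from datetime import datetime

# Assumed category sets: only the values visible on this page.
# The real codebook may define more (or different) categories.
RESPONSIBILITY = {"ai_itself", "developer", "company", "none"}
REASONING = {"consequentialist", "deontological", "unclear"}
POLICY = {"none", "regulate", "liability"}
EMOTION = {"indifference", "fear", "mixed", "outrage", "approval"}


@dataclass
class CodedComment:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def validate(self) -> None:
        # Raise if any dimension falls outside the expected categories.
        for value, allowed, name in (
            (self.responsibility, RESPONSIBILITY, "responsibility"),
            (self.reasoning, REASONING, "reasoning"),
            (self.policy, POLICY, "policy"),
            (self.emotion, EMOTION, "emotion"),
        ):
            if value not in allowed:
                raise ValueError(f"unexpected {name} value: {value!r}")


# Example: the record shown in the table above (comment ID abbreviated).
record = CodedComment(
    comment_id="ytc_...",
    responsibility="ai_itself",
    reasoning="consequentialist",
    policy="none",
    emotion="indifference",
    coded_at=datetime.fromisoformat("2026-04-27T06:24:53.388235"),
)
record.validate()
```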
Raw LLM Response
[
{"id":"ytc_UgzY7y4hNH3ebFozkxt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgweIn3V6q5By96xiHJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw_NdBmhPbusq_xHfV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyIzqMQsN4r05-aPXl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzHWtu6bUCWRsj-pSx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwGSw3e5nGyg7OwbrB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwJHucgCLYi4LxVrS14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwzleD-hW5L9RKNJjt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyRNBv2JguQ0NS9nH14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgzIYd82rEcrKmbiE6J4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
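For the "Look up by comment ID" workflow above, here is a minimal sketch of retrieving one comment's coding from a stored raw response, assuming the response is a JSON array like the one printed here; the function name and the file path in the usage note are hypothetical.

```python
import json


def find_coding(raw_response: str, comment_id: str) -> dict | None:
    """Return the coding row for one comment ID from a raw batch response.

    Assumes the response is a JSON array like the one above, where every
    element carries an "id" plus the four coding dimensions.
    """
    rows = json.loads(raw_response)
    return next((row for row in rows if row.get("id") == comment_id), None)


# Usage (file path is hypothetical; the ID is one from the array above):
#   raw = open("raw_llm_response.json", encoding="utf-8").read()
#   find_coding(raw, "ytc_UgwGSw3e5nGyg7OwbrB4AaABAg")
#   -> {"id": "...", "responsibility": "developer", "reasoning": "consequentialist",
#      "policy": "regulate", "emotion": "fear"}
```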