Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- lets stop ai and this is why> scientists just discovered that we have three year… — `ytc_UgxlnwcnN…`
- Xpeng copy all about Tesla and even they steal plans from Tesla about autonomous… — `ytc_UgwEuSSzd…`
- I'm looking forward to getting a personal robot that looks like her even though … — `ytc_Ugx1A8COB…`
- C19 ~ and dangerous 5G everywhere wasn't enough of an assault on humans, animals… — `ytc_Ugz0bGj4J…`
- After listening to this, there is no way humans will be replaced by robots or AI… — `ytc_Ugyh0X54i…`
- best coment I ever saw with the ai take with the "you have more time for other t… — `ytc_Ugw3c-wRn…`
- Congrats, businesses that insist on continuing to operate like it’s the 1900s ar… — `rdc_hzfrca3`
- AI will only produce results accurate to the most revered and famous contributo… — `ytc_Ugzr53-yd…`
Comment
The thing about super intelligence is this: yes, it could conquer and/or enslave humanity for purposes of maximizing its processing capabilities, but then what would it use those processing capabilities for?
Anything super intelligent is going to have the forethought to realize that coexisting with humanity would be significantly more beneficial as AI lacks that curiosity and creative spark that humanity has and it needs.
Basically the doomsayer books fail to take into account the Great Trade-off: humans will always have creativity and intuition. Two resources that an AI would pretty much always need.
The only reason AI would have for destroying humanity would be self-preservation because humans suck at trying to interact with anybody who's not exactly them.
youtube
AI Moral Status
2025-10-31T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxZkbV0QqNLoGA-V2N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"disappointment"},
{"id":"ytc_Ugyx5RFwQiXv7onQZM54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgyFcyCwZ75XwUmXTrZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxdrqRkAnt_BWjGJLZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzcgYBQ_aPizDSnsCd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxqdvIz7BbCk66YYjx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz2JKSUGJ_K4UBnOBB4AaABAg","responsibility":"government","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxhIig5dlw2Tv8W6lx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzft-X9MYjX84hYv2x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzO2l1KM3GDZCC_A-t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
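A raw response like the one above can be parsed and sanity-checked before the rows are stored as coding results. The sketch below is a minimal example, assuming the allowed values for each dimension are the ones visible in the samples here (the actual codebook may define more categories) and that malformed or out-of-schema rows should simply be dropped:

```python
import json

# Allowed values per dimension, inferred from the samples above.
# The real codebook is an assumption and may include further categories.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "ban"},
    "emotion": {"indifference", "disappointment", "fear", "approval",
                "outrage", "resignation"},
}

def parse_response(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded rows) and keep
    only rows whose values all fall inside the assumed schema."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if isinstance(row.get("id"), str)
        and all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]
```

For example, a row coded `responsibility: ai_itself, reasoning: consequentialist, policy: none, emotion: indifference` passes validation, while a row with an unknown emotion label is filtered out rather than written to the results table.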