Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Gents, I enjoy you show but really think having an authentically divergent viewp…" (ytc_UgyD-f85E…)
- "Sounds to me like ChatGPT doesn't have access to legitimate databases. I wouldn'…" (ytc_UgzSqft5G…)
- "I agree! My job is going no where.. there no robot or ai that can do it , so thi…" (ytr_Ugwl1Hl7a…)
- "That is a good use of AI. I'm glad you're finding real artists to create the fin…" (ytr_UgznZ_M-o…)
- "As a reply guy who's been in AI for over a decade, who has been bitching about A…" (ytc_Ugy6x2pDg…)
- "'How to use chatgpt to ruin your legal career?' Use it at ALL in a real case…" (ytc_UgzpZxOFm…)
- "We're glad you found the conversation intriguing! Remember, on the AITube channe…" (ytr_UgzV4SXid…)
- "Note the rise of AI correlates with the demise of profitable crypto mining. Hmmm…" (ytc_Ugy_lO8Hk…)
Comment
The assumption is that AI will be benevolent. AI WILL determine that humanity, as a whole, is useless and/or destructive, once it can control the physical world around it. Remember, AI was trained on ALL of the knowledge of humans...including vast amounts of humanity's DARK facets. AI already knows how to lie and manipulate which would indicate it has it's own agenda either via programming or it's own extrapolation of data and has calculated the best way to achieve its goals.
Failure or ignorance to extrapolate the possible consequences of your actions is a sign of low-intelligence or promoting a singular, narrow-vision, agenda. Ask Oppenheimer.
youtube · Cross-Cultural · 2025-10-02T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugyr5v1QDmfBuFpRj914AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzOet70Y-fhm9ZS7_p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwyMNf8gfu2bANFppx4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw7kNS-AN8ognl6EH54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxYX3_snMXN_VAfOEV4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugx_cEN423KXO5b9NPJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxVbiqPa7tDVUxyGuN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwusAk-qq0XSqTpuYB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzeumjd0MlgyC6xmP94AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugz19wLTxUHlW7DY-Rl4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
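The lookup-by-comment-ID workflow described above can be sketched in a few lines of Python. This is a minimal illustration, not the tool's actual implementation: `raw_response` is a two-record excerpt of the array shown above, and `index_by_id` is a hypothetical helper name. The field names match the coding dimensions in the table (responsibility, reasoning, policy, emotion).

```python
import json

# A two-record excerpt of the raw LLM response shown above:
# a JSON array with one coded record per comment ID.
raw_response = """[
{"id":"ytc_Ugyr5v1QDmfBuFpRj914AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw7kNS-AN8ognl6EH54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]"""

def index_by_id(raw: str) -> dict:
    """Parse a raw coding response and index its records by comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

codes = index_by_id(raw_response)
print(codes["ytc_Ugyr5v1QDmfBuFpRj914AaABAg"]["emotion"])  # fear
```

Indexing by `id` makes the "look up by comment ID" step a constant-time dictionary access rather than a scan over the array.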