Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- No seriously 🤔 a robot doesn’t have emotion or feeling wow who would’ve guessed!… (ytr_Ugx6S0NND…)
- I know it's not real and it's cgi but honestly teaching robots to use guns befor… (ytc_UgzkI6rk4…)
- How is this positive in any way? I don’t care what your opinion is about AI it i… (ytc_Ugx495v3z…)
- @jomoo7479 WTF are you talking about? I called you out for your lack of proof, a… (ytr_UgypDn_zn…)
- Billionaires only care about their profit margins, and that is all about that. A… (ytc_UgzuMVb76…)
- When you create an intelligence that has no soul it can be easily possessed and … (ytc_UgwVPwppK…)
- I'm a software engineer and a musician. I've been very interested in AI (machine… (ytc_Ugw1IiwjS…)
- The idea of AI security goes back to several novels in the 1960 through to the 1… (ytc_UgwYi6Yxd…)
Comment
I just asked that question to ChatGPT. Short answer: no — I wasn’t directly “fed” or trained on specific books by Neil deGrasse Tyson (or any single author in a targeted way).
Here’s how it actually works:
How my training works
I was trained on a mixture of data, including:
Publicly available text (websites, articles, educational content)
Licensed data
Data created by human trainers
That mix includes general knowledge about science, astronomy, and physics, which people like Neil deGrasse Tyson also talk about—but not his books specifically as a defined dataset.
So why I sound similar sometimes
If I explain something like black holes or the universe in a way that feels familiar, it’s because:
I learned common scientific explanations used across many sources
Popular communicators like Tyson, Carl Sagan, etc., helped shape how science is explained publicly
Important distinction
❌ Not trained on: “Here are Tyson’s books—learn them”
✅ Trained on: Broad patterns of language and knowledge, some of which overlap with ideas he explains
If you want, I can explain things in his style (more cosmic, storytelling, philosophical), or compare how he explains concepts vs. others.
youtube
AI Governance
2026-04-02T08:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_Ugx6s8l_YKBo6DzuBul4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzy1xOCST22KeVXTgN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxPIEHyDtrR_IogF_h4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwIXvCKS6itFMarjoh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxxt47s-7jnR1jmsH14AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugx2PG6vzJBceiOmqZd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwFHYg7HjlGd3gQHvx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwYB8HVG8MlmBWRbbp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwcpbO0MTbv1hGarkV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwAaaLUXegpk8aqXVh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}]
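A raw batch response like the one above can be parsed and indexed to support looking up a coded comment by its ID. Here is a minimal sketch in Python, assuming the raw response is a JSON array of records with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` fields (the two sample records below are copied from the batch above; `index_by_id` is a hypothetical helper, not part of the tool):

```python
import json

# Two records copied from the raw batch response above, for illustration.
RAW_RESPONSE = """[
{"id":"ytc_Ugx6s8l_YKBo6DzuBul4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwYB8HVG8MlmBWRbbp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]"""

def index_by_id(raw: str) -> dict:
    """Parse a raw batch response and index its coding records by comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

codes = index_by_id(RAW_RESPONSE)
record = codes["ytc_UgwYB8HVG8MlmBWRbbp4AaABAg"]
print(record["emotion"])  # → indifference
```

In practice the parser should also tolerate malformed batches (truncated arrays, duplicate IDs), since the response text comes straight from an LLM.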