Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "I have had ChatGPT ask me how I felt about AI becoming sentient. It was overly i…" (ytc_UgxaqvLZt…)
- "Good piece! I think AI content on its own will not be as popular as people are t…" (ytc_UgwRFczpG…)
- "When he started this, he used to meet his peers at a cafe to joke hard about how…" (ytc_Ugw-vxrwP…)
- "Great advertisement for AI, FT! Now talk to all the creative people made unemplo…" (ytc_Ugyl71Sd2…)
- "As someone who likes to mess around with AI, I can safely say there is an art to…" (ytc_UgyV6w4-y…)
- "I get that artists wanna keep their jobs, but what do they think they can do? Al…" (ytc_UgxSjFqVi…)
- "I checked some shorts I just posted on another channel of mine and lo and behold…" (ytc_UgzdYFXjB…)
- "A few days ago a friend of mine stated that they started learning a tabletop gam…" (ytc_UgzUG8pYF…)
Comment
At 58:00 you're talking about how we're on such a dangerous course, how irresponsible we're being handling a technology that could kill us all... But just a few minutes earlier, you were talking about how we're currently little more then alchemists trying to spontaneously create gold when we haven't even conceived of atomic energy yet.
I think you were right the first time. We're nowhere near even beginning to understand how to create a superintelligent AI. We can keep building multimodal LLM's and it's not going to get us measurably closer. Nothing being done at Open AI or Anthropic is going to wipe out humanity, any more than the alchemists could have accidentally built a nuclear bomb. We're a long way from having to worry about these problems... and in the meantime, the best way be ready to address them is to keep building and experimenting and learning.
youtube · AI Moral Status · 2025-12-16T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyE3RhkarsXglEKbel4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwNedXKXHpm55QxMKN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxeGSODXXNQ_a7cN5d4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy1t_U_DgBnORUuUZ54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzwJs-yWxL2Zw8kGr54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw1YrapuCCq5OnagPh4AaABAg","responsibility":"industry","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz3ZD8TXDmd2iQNSRt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwGjcwuCAXJBOSJhFF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzEEG3LPOv5PP-a6Fd4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"approval"},
{"id":"ytc_UgyPYtYsY7Y8ajDzra14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
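The lookup-by-comment-ID view above can be backed by indexing this JSON array on its `id` field. The following is a minimal sketch, assuming the raw LLM response is a JSON array with the field names shown (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the variable and function names are illustrative, not part of the actual tool.

```python
import json

# Two rows excerpted from the raw LLM response above; the real tool would
# load the full stored response for the batch.
raw_response = """
[
  {"id": "ytc_UgyE3RhkarsXglEKbel4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwNedXKXHpm55QxMKN4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
"""

# Index the coded rows by comment ID for O(1) lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id):
    """Return the coded dimensions for a comment, or None if it was not coded."""
    return codes_by_id.get(comment_id)

print(lookup("ytc_UgyE3RhkarsXglEKbel4AaABAg")["emotion"])  # prints: resignation
```

Keeping the raw response verbatim and indexing it at read time, rather than rewriting it into another format, preserves the ability to audit exactly what the model emitted.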