Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment
This guy's organization is partially funded by Peter Thiel and multiple anonymous cryptocurrency investors. Despite the dire warnings, I feel like Nate is way more focused on what-if and not what-is. Will we eventually develop super-intelligence? Maybe. Are we currently dealing with an AI crisis? Absolutely.
We would need a massive paradigm shift in how AI works in order to genuinely model reasoning. Right now AI are still using predictive text for their own reasoning, which means all their "reasoning" is just generating questions from what the most likely questions would be. These systems are not intelligent, and there's a genuine danger in treating them as such.
youtube · AI Moral Status · 2025-10-31T01:5… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy352lDkj3E40ABTPd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyX7eo-uBkMrZ3D9zl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyqEnkkOba6Rc-0kkB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgykaBsAKWzANf78_nB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz7PXWuFqtYSuAETC54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw5RvOiYN8A2YddYUJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy2P55-9EZRxrm-s9R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzjEaO7SUA096JPSxB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxEX_FhsbfY0EuN3l14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgztzVvcq-E-XJa3_Jl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}
]
```
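The raw batch response is a JSON array of per-comment codings, so looking up any coded comment by ID is a matter of parsing the array and indexing on the `id` field. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response above; the lookup helper itself is an illustrative sketch, not part of the tool:

```python
import json

# A two-row excerpt of the raw batch response shown above,
# in the same shape the model emits.
raw_response = """
[
  {"id": "ytc_UgxEX_FhsbfY0EuN3l14AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy352lDkj3E40ABTPd4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
"""

# Index the batch by comment ID so any coded comment can be
# looked up directly, as the "Look up by comment ID" view does.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgxEX_FhsbfY0EuN3l14AaABAg"]
print(coding["responsibility"], coding["policy"])  # company regulate
```

Because the model returns one object per comment, a missing ID in the parsed dict is also a quick way to spot comments the batch failed to code.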