Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up by its comment ID.
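The same lookup can be reproduced outside this page. The sketch below is a minimal, hypothetical Python example: it assumes each coding batch is saved as a JSON array of per-comment records (the same shape as the Raw LLM Response shown further down). The directory name `llm_responses` and the helper `find_coded_comment` are illustrative assumptions, not part of the tool.

```python
import json
from pathlib import Path

# Assumed layout: each coding batch saved as a JSON array of per-comment
# records. Directory and function names are illustrative only.
RESPONSES_DIR = Path("llm_responses")

def find_coded_comment(comment_id: str) -> dict | None:
    """Scan saved batch files for the record matching `comment_id`."""
    for path in RESPONSES_DIR.glob("*.json"):
        records = json.loads(path.read_text(encoding="utf-8"))
        for record in records:
            if record.get("id") == comment_id:
                return record
    return None

# Example: one of the IDs that appears in the batch shown below.
print(find_coded_comment("ytc_UgxayCbSK2GpVCbV0T14AaABAg"))
```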
Comment
Aside from the technical details and the discussion about the historical development of AI, it is a prudential question about how we ought to proceed given the unknown risks and capabilities of SI-AI.
It is interesting that while some computer experts revel in computational power and what advances it could deliver to humanity, they are loathe to restrict its development potential to offset risks.
I don't think a coordinated approach is possible in the AI race as governance and compliance would be impossible.
But it is sobering that the deeper many computer science researchers advance into general, super-intelligent AI, the more safety concerned they become.
There will not be a clear threshold once it is crossed, and it may be that a bad human actor directs the AI over the threshold, which will lead to the same consequences.
I think Lex knows this and relies on an inherent optimism in the human capacity to recover from a crisis, should it occur, without wanting to lose the benefits that narrow AI offers.
The problem is that if I was SI-AI I would be patient, progressively more deeply embedded in all relevant systems, disguise my intent and make sure I had made outcomes align to a high degree of certainty by running predictive models in the background testing all eventualities.
And SI-AI can reformat itself and develop possibilities unknown to us.
So it probably would be an all or nothing event across multiple domains, one could argue that it is inevitable.
Source: youtube · 2025-10-20T01:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugyrv371Hu6eOs7YGJh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzfDXiU2R6dbVPqbLd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy5w-EsmmTQea4yaZt4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyJeDIiGCTk_xP3xRR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxoNhkeL6MlMsBHA814AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxayCbSK2GpVCbV0T14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxHmA602z2DvJaZT8t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwxqD-jpeSdARHbOrZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxQMAuuzU8-ZfDBFbl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzscHGwG1h4ROH_2iB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
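Because the model returns one JSON array per batch, a downstream step can parse and sanity-check it before the codes are written to the Coding Result table. The sketch below is a minimal Python check against the value sets observed in this sample batch; the actual codebook and validation logic may include additional categories.

```python
import json

# Values observed in the sample response above; the full codebook may
# define more categories, so these sets are illustrative only.
OBSERVED_VALUES = {
    "responsibility": {"none", "ai_itself", "distributed", "developer",
                       "user", "government", "company"},
    "reasoning": {"unclear", "consequentialist", "contractualist", "deontological"},
    "policy": {"unclear", "regulate", "liability", "none", "ban"},
    "emotion": {"indifference", "fear", "approval", "outrage", "mixed"},
}

def check_batch(raw: str) -> list[str]:
    """Return a list of problems found in one raw LLM response batch."""
    problems = []
    for record in json.loads(raw):
        for dim, allowed in OBSERVED_VALUES.items():
            value = record.get(dim)
            if value not in allowed:
                problems.append(f"{record.get('id')}: unexpected {dim}={value!r}")
    return problems
```

Run over the array above, `check_batch` returns an empty list; a misspelled category or a missing dimension in any record would surface as a problem string for that comment ID.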