Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- I feel for this poor kid. I posted a WIP ref of an OC of mine in a group chat I'… (ytc_UgzAiYCHj…)
- AI appears to be an enormous pattern recognition system, which then regurgitates… (ytc_UgzzfqyrY…)
- Geoffrey Hinton's last remark was of the kind of left-field thinking I struggle … (ytc_UgxsBZFjq…)
- False. Hairdressers will, one day, be replaced. The washing basins that do the s… (ytc_UgzEvpCzp…)
- @friiq0 it's irrelevant… but for sake of argument let's consider worst case scena… (ytr_UgzZulSU8…)
- if you say that it's moving past an infinite number of individual points in spac… (ytr_UgzGLTvZO…)
- I reckon the AI will eventually code its own updates, compile its own code, make… (ytc_UgzWFgTCr…)
- Manufacturers need consumers. Boycott ANY products from companies who fire human… (ytc_Ugz_S9hwy…)
Comment
@TheDiaryOfACEO I am at 1hr and 9mins, so perhaps this is discussed at the end, but there are two things I didn't hear discussed.

One is power, as in electricity. I was under the impression the AI companies were scrambling to build their own electricity plants, such as solar farms and nuclear reactors. Let's say someone brilliant and psychopathic had the complete plans to build a superintelligence today: could it actually run on the electricity grid and power-generation infrastructure we have today? Even if it was distributed, wouldn't it still need more power than the whole world can generate today?

The other question I had was the Three Mile Island question. The disaster at Three Mile Island set back nuclear power in the US for some years. So before we get to superintelligence, what are the chances that there is some "little" AI disaster that kills, say, 500 million people rather than 8.5 billion? Would a "small" catastrophe be the thing to wake people up en masse to the AI threat? Does your guest believe there are those who believe in the AI threat so strongly that they are trying to create a multi-million-person AI-caused death event just to wake humans up to the bigger threat of superintelligence?
youtube · AI Governance · 2026-01-14T04:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz-ZBG3axn58fkIReZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxFgonlX0YmUfvEuzp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzRQuX_y4MdLJNYPQx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzbcmFIjP_c4EMU8-h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyhVMMUm0TS15rAe8B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzXTlWW9mZAnyhkX3R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx8YUS5ar3zBBD2HeJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugzuq8vMtXIwwu1OJuN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzlEndQjuixp13yd4R4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgwXRooTpInvoqGRVvJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}
]
```
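A response in this shape can be turned into a lookup table keyed by comment ID, which is how the "Look up" view above would resolve a coding. Below is a minimal sketch in Python, assuming the batch format shown here; the allowed value sets are inferred from this sample alone (the full codebook may define more), and `parse_codings` is a hypothetical helper name, not part of the actual pipeline.

```python
import json

# Dimension values observed in this sample; the real codebook may allow more.
OBSERVED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"liability", "regulate", "ban", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "approval"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding objects) into a
    dict keyed by comment ID, warning on values outside the observed sets."""
    rows = json.loads(raw)
    by_id = {}
    for row in rows:
        cid = row["id"]
        for dim, allowed in OBSERVED.items():
            if row.get(dim) not in allowed:
                print(f"warning: {cid}: unexpected {dim}={row.get(dim)!r}")
        by_id[cid] = {dim: row[dim] for dim in OBSERVED}
    return by_id

# Usage with one row from the response above:
raw = ('[{"id":"ytc_UgyhVMMUm0TS15rAe8B4AaABAg",'
       '"responsibility":"company","reasoning":"consequentialist",'
       '"policy":"liability","emotion":"fear"}]')
codings = parse_codings(raw)
print(codings["ytc_UgyhVMMUm0TS15rAe8B4AaABAg"]["emotion"])  # fear
```

Note that this coding matches the "Coding Result" table above for the displayed comment (company / consequentialist / liability / fear).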