Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment directly by its ID.
Random samples — click to inspect

- "Current neural network tech is just an ability to predict the next unit in a pat…" (ytc_Ugzu6yvwb…)
- "And that will work until the AI decides it's no longer necessary to support us. …" (ytr_UgxMdSyxB…)
- "No mention of the fallibility of AI, in particular facial recognition software, …" (ytc_UgyY9ISvt…)
- "If humans were a good caring spiecies then we would have used AI with the main g…" (ytc_Ugx2AK6WT…)
- "Sadly, Crystal has the right instincts, but gets all the tech wrong. Yes, ther…" (ytc_UgwU0Uze4…)
- "When I was school learning about the Roman Empire, Alexander the Great, Genghis …" (ytc_Ugws84fqs…)
- "AI makes mistakes in the harder parts of the code. Often enough, the juniors wil…" (ytc_Ugz1-DKCP…)
- "To some degree, we know who is developing AI, even if there is not anything clo…" (rdc_je4mn99)
Comment

> I've been having this argument with a few of my friends recently. We are on the brink of an AI takeover and do NOT have the systems for what happens to society. So, AI + Quantum Computing = Completely insane processing power. Now you've got the ability to replace half or more careers with ai, and automation with it. So what happens when 150 million people have zero job opportunities? We need things like Universal Income to offset the losses.

| Field | Value |
|---|---|
| Source | youtube |
| Category | AI Governance |
| Posted | 2025-06-20T13:5… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
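Each of the four coded dimensions takes a value from a closed set. A minimal validation sketch for one coded record follows; the allowed value sets are inferred from the sample output on this page and are assumptions, since the full codebook may define more values:

```python
# Allowed values per coding dimension, inferred from the sample LLM output.
# NOTE: these sets are assumptions; the actual codebook may include more codes.
CODEBOOK = {
    "responsibility": {"developer", "company", "government", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "concern", "outrage", "resignation"},
}

def validate(record: dict) -> list:
    """Return a list of problems with one coded record (empty list = valid)."""
    errors = []
    for dim, allowed in CODEBOOK.items():
        value = record.get(dim)
        if value is None:
            errors.append(f"missing dimension: {dim}")
        elif value not in allowed:
            errors.append(f"unexpected {dim} value: {value!r}")
    return errors

# The record shown in the Coding Result table above:
coded = {"responsibility": "distributed", "reasoning": "consequentialist",
         "policy": "regulate", "emotion": "fear"}
print(validate(coded))  # → []
```

A check like this is useful as a guard before storing coded records, since LLM output can drift outside the expected label set.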
Raw LLM Response
```json
[
  {"id":"ytc_UgyqS0KHa9rLuRDmJ7N4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"concern"},
  {"id":"ytc_UgxrPOUK6AUSfyvMq0Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyYwkvmsPln3gXzPGR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyaB9KGfGdaGQcrjP94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugz1b9GcOKGyEZ_zoUR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwvbNonrkA3uwvEdkF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxA16LCaTxjQBvaFR94AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzZY5SdAHSFHnHOnD14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxzpOCNNUDRc0f6h5B4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxnVaW8bXswnn94hKx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
```
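The look-up-by-ID flow is a small parse-and-index step over this raw output. A sketch follows; the two embedded records are copied from the sample response above, while the function and variable names are illustrative:

```python
import json

# Raw LLM response: a JSON array of coded records, one object per comment.
# These two records are copied from the sample output shown above.
raw_response = """
[
  {"id": "ytc_UgyYwkvmsPln3gXzPGR4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyaB9KGfGdaGQcrjP94AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]
"""

def lookup_by_id(raw, comment_id):
    """Parse the raw model output and return the record for one comment ID."""
    records = json.loads(raw)
    by_id = {r["id"]: r for r in records}  # index once; O(1) per lookup after
    return by_id.get(comment_id)           # None if the ID was not coded

record = lookup_by_id(raw_response, "ytc_UgyYwkvmsPln3gXzPGR4AaABAg")
print(record["policy"])  # → regulate
```

In a real pipeline the index would be built once per batch rather than per lookup, and a missing ID (the `None` case) usually signals that the model dropped a comment from its response.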