Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "What will AI do with theological ideas? Challenging the faith or the unknown que…" (ytc_Ugz9zM_Qp…)
- "Imagine In 2055 When We Have AI Cashiers And They Wanna Profile A Specific Someo…" (ytc_UgyUh33a8…)
- "AI is about to eliminate many people, mostly the ones relying on it too heavily…" (ytc_UgzTs-UxL…)
- "I can't believe Isaac Asimov saw this coming 21 years ago, actually 85-75 years…" (ytc_UgwmmZOxU…)
- "Algorithmic bias is a real threat but if the cops can overcome that then fair pl…" (ytc_UgyblHVwo…)
- "7:10 Whoa there my guy, did you just say you can accurately distinguish between …" (ytc_Ugx5xFKsP…)
- "And still using the microwave. You are technically doing the work I just have to…" (ytc_Ugz_1q6W8…)
- "So is AI replacing the government workers too? That would be fantastic to have s…" (ytc_Ugy6_aRK3…)
Comment
AI is dependent on considerable hardware, and considerable power. While little focused baby AI tasks can be put on small systems, the idea that Ultron AI can just duplicate himself on little systems everywhere is fiction. Unless we make AI robots that are trained to make other robots there is limited danger. AI cannot just replicate itself. We can unplug it, blow it up, turn it off. I use AI daily. It is brilliant but really stupid, and current models (Chat gpt, Grok) do not really learn by themselves, they are trained. They are completely limited by their highly controlled training. Geoffrey Hinton totally discredited himself with the Musk/DOGE assertions.
youtube · AI Governance · 2025-06-17T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz6v90t4AYoARYGlTx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyIL_Was7PgnrcqkCx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwoYhWLDCZaKclt4lx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxI2pt2w4mJ67iu6694AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzYcdokzu1zXHIDV2Z4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgysYMQCb4Hm5aH2Hg54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxgRPB1qdQkVaGOfK14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy99dODp5tYC6t5xoJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx581l1BB1XvuIS2Il4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwHjH4Y3qiDJ4gfB5x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
```
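The raw response is a JSON array of per-comment codes, so lookup by comment ID amounts to parsing the array and indexing it by the `id` field. A minimal sketch, assuming only the field names visible in the response above (the variable names and the two-row sample are illustrative, not part of the tool):

```python
import json

# Illustrative excerpt of a raw model response: a JSON array of
# per-comment coding records, as shown above.
raw_response = """
[
 {"id": "ytc_Ugz6v90t4AYoARYGlTx4AaABAg", "responsibility": "none",
  "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
 {"id": "ytc_UgwoYhWLDCZaKclt4lx4AaABAg", "responsibility": "distributed",
  "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# Index the batch by comment ID so any single coded comment can be
# looked up directly, as the inspector does.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

record = codes_by_id["ytc_UgwoYhWLDCZaKclt4lx4AaABAg"]
print(record["policy"])   # regulate
print(record["emotion"])  # fear
```

The same dictionary lookup generalizes to any batch size; an unknown ID simply raises `KeyError`, which is a convenient signal that a comment was dropped from the model's output.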