Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> All AI needs to implement Asimov's three rules. The first law is that a program shall not harm a human, or by inaction allow a human to come to harm. The second law is that a program shall obey any instruction given to it by a human (unless it violates the first law), and the third law is that a program shall avoid actions or situations that could cause it to come to harm itself.

Source: youtube · Topic: AI Governance · Posted: 2023-04-20T02:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
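A coded record like the one above can be sanity-checked before it is stored. The sketch below validates a record against the dimension values that appear on this page; the sets are only those observed here (the full codebook may define more), and the `validate` helper is a hypothetical illustration, not part of the coding pipeline.

```python
# Dimension value sets observed on this page; the full codebook may define more.
DIMENSIONS = {
    "responsibility": {"ai_itself", "user", "developer", "distributed", "government"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "unclear"},
}

def validate(record: dict) -> list:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for dim, allowed in DIMENSIONS.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The coding result shown in the table above:
coded = {"responsibility": "ai_itself", "reasoning": "deontological",
         "policy": "regulate", "emotion": "unclear"}
print(validate(coded))  # []
```

Running `validate` on each record as it comes back from the model catches off-codebook values (or missing fields) early, rather than at analysis time.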
Raw LLM Response
```json
[
{"id":"ytc_UgxDs_G6yMuMR3rbUBp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzSJE0OobKT6yTGKcB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwfiqdkDG_KRiluIth4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz8cUoXObSig4XdPu14AaABAg","responsibility":"developer","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyV9t7DmDGWWEHnhAl4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgykD5GjhgaE6QT38i14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"unclear"},
{"id":"ytc_Ugwb5caVnaJyQP5nCj14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzHxEgJ913rqAYzpLh4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxUrs36AIgKToIgcZZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgxFtKeubJNiA859-Bl4AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
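Because the model returns a JSON array keyed by comment ID, looking up the coding for a specific comment is a parse-and-index step. A minimal sketch, assuming the raw response is valid JSON as shown above (only two of the records are reproduced here for brevity):

```python
import json

# Two records copied from the raw LLM response above.
raw = """[
{"id":"ytc_UgxDs_G6yMuMR3rbUBp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgykD5GjhgaE6QT38i14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"unclear"}
]"""

records = json.loads(raw)                    # parse the model output
by_id = {r["id"]: r for r in records}        # index records by comment ID

rec = by_id["ytc_UgykD5GjhgaE6QT38i14AaABAg"]
print(rec["policy"])  # regulate
```

In practice the raw response may also contain malformed JSON or duplicate IDs, so a production version would wrap `json.loads` in error handling and check that `len(by_id) == len(records)`.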