Raw LLM Responses
Inspect the exact model output for any coded comment: look up a comment by its ID, or click one of the random samples below.
- "ChatGPT, how do I start up a successful Chinese AI company with no money? Oh, v…" (rdc_m9fbbr2)
- "HURRAY, THE FATHERS HAVE BLESSED WITH ANOTHER TORTURE OF THE AI. HURRAY, THANK Y…" (ytc_UgxdRBE5B…)
- "If the waymo vehicles or better than human? Why aren't they allowed on the freew…" (ytc_UgztqZLv4…)
- "and it really was an hour and a half! Damn, I didn't even notice! Sometimes I ev…" (ytr_Ugxqxtwxd…)
- "My leasing office just implemented AI to answer phones and manage tenants reques…" (ytr_UgxJ2WmCY…)
- "Bernie Sanders put a post on social media suggesting to combat the fear of AI im…" (ytc_Ugzw6-z7Z…)
- "9:36 ai cannot create properly. Its only useful when it is cutting things away b…" (ytc_UgwZ68e18…)
- "I mean if a job can be automated is it really worth a human doing anyway? It wou…" (rdc_mxyi88x)
Comment
For 1:41 - how about these for human preservation to start:
Isaac Asimov's Three Laws of Robotics
1. No Harm: A robot cannot harm a human or, through inaction, allow a human to be harmed;
2. Obey Orders: A robot must obey human orders, unless they conflict with the First Law;
3. Self-Preservation: A robot must protect its own existence, provided it doesn't conflict with the First or Second Law.
youtube · AI Governance · 2026-01-18T10:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
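
For orientation, here is a minimal sketch of one coded record as a typed structure. The enumerations below are inferred only from the values visible on this page (ai_itself, deontological, regulate, approval, and so on); the actual codebook may define additional categories.

```python
from dataclasses import dataclass

# Assumed codebook values, inferred from the records shown on this page;
# the real coding scheme may include categories not listed here.
RESPONSIBILITY = {"ai_itself", "developer", "company", "government", "distributed", "none"}
REASONING = {"deontological", "consequentialist", "virtue", "unclear"}
POLICY = {"regulate", "liability", "none", "unclear"}
EMOTION = {"approval", "fear", "outrage", "resignation", "unclear"}


@dataclass
class CodedComment:
    """One coded comment, mirroring a record in the raw LLM response."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Reject any value outside the (assumed) codebook.
        for field_name, allowed in (
            ("responsibility", RESPONSIBILITY),
            ("reasoning", REASONING),
            ("policy", POLICY),
            ("emotion", EMOTION),
        ):
            value = getattr(self, field_name)
            if value not in allowed:
                raise ValueError(f"{self.id}: unexpected {field_name} value {value!r}")
```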
Raw LLM Response
```json
[
  {"id": "ytc_UgyucO0EmbukKCVopbJ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzK-CDpgcMCv3uEIth4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw2YS_OmhrgLtps6t54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgyIt_lS-CDUo7GvzOF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzAd0qFFjd7jDBfw-V4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzWtZQTH4P1_EUConl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgwGGKKJ-Eg1kTEP9CZ4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugx6rZDrh_Fjqf1LBW94AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgymjRZDVarjv46TCnV4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugzv9e8yFr4Xj_Lww614AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
```
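
Because the model returns one JSON array per batch, the "look up by comment ID" flow reduces to indexing that array by the `id` field. A minimal sketch, assuming the raw response above has been saved to disk (`raw_response.json` is a hypothetical filename, not part of the original tool):

```python
import json

# Load the raw LLM response: a list of dicts, one per coded comment.
with open("raw_response.json", encoding="utf-8") as f:
    records = json.load(f)

# Index the batch by comment ID for O(1) lookup.
by_id = {record["id"]: record for record in records}

record = by_id.get("ytc_UgyucO0EmbukKCVopbJ4AaABAg")
if record is not None:
    print(record["responsibility"], record["reasoning"],
          record["policy"], record["emotion"])
    # -> ai_itself deontological regulate approval
```

Run against the array above, this lookup reproduces the Coding Result table for the selected comment.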