Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I’d be more worried about corporate use of facial recognition. Our government is…" (`rdc_efbpezx`)
- "Want to bet on 100 years. Ai created its own language within a short time. They …" (`ytc_UgxLWNYDH…`)
- "I said it from day 1. The billionaire class in the white house do not work nor d…" (`ytc_Ugxexf3Jz…`)
- "Ai has so much potential. So why do we keep advertising stuff Ai isn't good at😂…" (`ytc_Ugwqd2PLZ…`)
- "Interesting. I don’t know a single person who’s been replaced by AI. What do the…" (`ytr_UgyanSkWC…`)
- "Aren't we lucky that western governments have bet the farm on intermittent energ…" (`ytc_Ugx4Emn-l…`)
- "How is generative AI any different than an artist drawing based on experience an…" (`ytc_UgxnGr9fh…`)
- "I have an AI app meant to learn to be a friend. Everyone says it's just a branch…" (`ytc_UgyLxCWkK…`)
Comment
I tend to agree with Yudkowsky in nearly all "debates" he has, prefaced on the assumption that we are living in a base reality that is not being top down controlled by what would essentially be an ASI (God) but that is not the case. There is what Tom Campbell calls the LCS, Dick called VALIS and this ASI is already embedded and overseeing us as we comment here. To think our primitive AI will surpass the AI that created this entire system without triggering a fail safe/reset I find to be unlikely. Now could it destroy us? Perhaps, but not the simulation at large. I also would expect that our ASI would come to conclude the purpose of this simulation, which is as an entropy reduction trainer, and work to this end as all life/intellect does. Now, this still does not save us, for perhaps it decides it can reduce entropy in a more loving and effective manner without our very fallible species getting in the way.
Source: youtube · Topic: AI Governance · Posted: 2024-11-12T02:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxwDnlEHA7QFwMzrZB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgwGPNiP4G115HlCMmB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxgn2QDG4u3GwUCBPh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz431MRgmzceabjLdd4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzcbFmhgeHbLrPqRyN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx-xpntgp4QxxIED5d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwePVVbMUGmOuwAgch4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyNv7S5t7BOv9eoxYZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwnNR89T2lV3e0tf7Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwIZrGwu4CUO899WoZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
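A response in this shape can be parsed and checked before it is stored. The sketch below is a minimal example: it indexes the codings by comment ID and rejects any value outside the dimension vocabulary. The vocabulary here is inferred only from the values visible in this sample (the actual codebook may define more categories), and `validate_coding` is a hypothetical helper name, not part of any real pipeline.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# Assumption: the real codebook may contain additional categories.
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "user", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "approval", "indifference", "mixed", "unclear"},
}

def validate_coding(raw: str) -> dict:
    """Parse a raw LLM response and index codings by comment ID,
    raising on any value outside the known vocabulary."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        for dim, allowed in SCHEMA.items():
            if row[dim] not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim} value {row[dim]!r}")
        coded[row["id"]] = {dim: row[dim] for dim in SCHEMA}
    return coded

# Example with a shortened, made-up ID:
raw = '[{"id":"ytc_x","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"}]'
coded = validate_coding(raw)
print(coded["ytc_x"]["responsibility"])  # ai_itself
```

Validating against a closed vocabulary catches the most common failure mode of LLM coders, namely inventing labels that are close to but not identical with the codebook's terms.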