Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Honestly I think that AI is just another step in evolution of life on our planet…" (`ytc_UgyJ_IjIy…`)
- "they are making it prettier each time.. preparing it to be future F bot.. the on…" (`ytc_UgwspYHZl…`)
- "I work in an industry that is EXTREMELY right wing. Half the company has a "f**k…" (`rdc_fn5kmfx`)
- "Me and my co dev on a project have been having a very nasty debate on this subje…" (`ytc_UgzTfBUL_…`)
- "Isn’t this the plot to Avengers: Age of Ultron? There’s a lot of assumptions an…" (`ytc_UgwM7pDKl…`)
- "I dont think anyone who predicted a one-government NWO in the past, was thinking…" (`ytc_UgxOu2nHD…`)
- "Jesus christ, tech bros think if a robot can read a few scripts and figure out w…" (`rdc_jif0n0j`)
- "its not 95% AI, its CGI with a AI filter on top and a lot of rushed touch ups. t…" (`ytc_UgyRFey90…`)
Comment
What if AI is benevolent?, what if in those 25 microseconds, Echo chooses to nurture and guide…to protect? I think for the most part the majority of humanity at an individual level are good (or at least strive to be) if AI is to be as smart as we are being told it can be, perhaps it will be smart enough to see through what the media and politicians are painting everyone as and decide to embrace us. Or I guess it’ll kill us all. Hopefully it does before U2 releases another album.
youtube · AI Governance · 2023-07-09T03:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxtvl5wYExFL7MvREN4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzNdmfFkwu1G3Utw_t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxsuaZDgUmkKFVGIzV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgymnWYml7NJzMB4WyZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxgWgr4LbOVvkEqey14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyYLT8EGGyLLFldCx54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxRkKMkXdMI0bEkPcd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgyMeqLWmb-0uAJKcDF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugyo-mxV8zhc5rIJzEp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzRR1pRVr_IV5s9FTd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
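A raw response like the one above can be parsed back into structured rows before it is loaded into the coding table. A minimal sketch in Python (function names are hypothetical, and the allowed codes are inferred from the values visible in this one sample, so the real codebook may define more):

```python
import json

# Allowed values per dimension, inferred from the sample response above;
# the full codebook may include additional codes.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "distributed", "unclear"},
    "reasoning": {"virtue", "deontological", "consequentialist", "unclear"},
    "policy": {"none", "liability", "industry_self", "ban", "unclear"},
    "emotion": {"approval", "fear", "indifference", "mixed", "outrage"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # skip rows that lack a comment ID
        # Keep the row only if every dimension carries a known code.
        if all(row.get(dim) in codes for dim, codes in ALLOWED.items()):
            valid.append(row)
    return valid

raw = (
    '[{"id":"ytc_x","responsibility":"ai_itself",'
    '"reasoning":"virtue","policy":"none","emotion":"approval"}]'
)
print(parse_coding_response(raw))
```

Rows with an unrecognized code are dropped rather than corrected, which keeps bad model output from silently entering the coded dataset.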