Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Sarah Connor is going to off Geoffrey Hinton with the help of a time traveling A…" — ytc_UgzKqBdjt…
- "I asked ChatGPT to generate me an image of Snow White during the Zegler fiasco a…" — ytc_UgzPFR0x5…
- "If this was a movie, AI would take this guy out as the first kill while he's out…" — ytc_UgxwH9WSY…
- "When I use co pilot I tell the copilot to never never ever touch my working file…" — ytc_UgyMYwmdF…
- "Atheist A.I was setting a new stage everytime and hence, having a slight edge on…" — ytc_UgytTT4UF…
- "AI will destroy the human race... JUST like the movies... We will not be able to…" — ytc_Ugw5HLWNR…
- "Well i cant say im surprised. But money and greed is the down fall of humanity. …" — ytc_UgwDbyRCf…
- "Imagine a future when humans have relinquished medical study and given themselve…" — ytc_UgzrilBuF…
Comment (youtube · AI Governance · 2024-11-10T21:2…)

> Corp greed will continue to push the technology forward at a pace that prevents us from understanding the risks/controls to mitigate. Look at the mental health issues that are just now being dealt with.. especially w/ teens and young adults. The AI risk profile is like Social Media on steroids. Cyber security threats from countries like NK is just one of many ways AI could threaten our future.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxh1bweTr_PglEvli14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzd1g-jYoX6D_iPi-N4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy7eVvpunjigutKRWJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzW8f8t68JewLoDYnp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyadR4WWTndCmYR-hF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"unclear"},
  {"id":"ytc_UgxHZDLfH0SwbuFt4AN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"unclear"},
  {"id":"ytc_Ugx2jHIbH9Luhvukkq94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw61rD_8Swx-kO7etN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwCKjKicz0nXpcn0kN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwzfwPoT5Igj74oyE14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
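The raw response above is a JSON array in which each object carries a comment `id` plus the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch can be parsed and indexed for lookup by comment ID — the function name is illustrative, not part of this tool:

```python
import json

def index_by_comment_id(raw_response: str) -> dict:
    """Parse a raw batch response and map comment ID -> its coding dict.

    Assumes the shape shown above: a JSON list of objects, each with an
    "id" field and the four coding dimensions.
    """
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

# Small example using one record from the response above.
raw = '''[
  {"id": "ytc_Ugxh1bweTr_PglEvli14AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

coded = index_by_comment_id(raw)
print(coded["ytc_Ugxh1bweTr_PglEvli14AaABAg"]["policy"])  # -> regulate
```

Indexing by ID makes the "look up by comment ID" step a single dictionary access rather than a scan over the batch.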