Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
@Shrimpy_Al That's a strange interpretation. Do you trust the marketing that the…
ytr_UgwIkPeK6…
Guardrails in AI? Yes, please! Pneumatic Workflow is my go-to for keeping contro…
ytc_UgwAi-X9r…
I have the $20 openai sub, it's been ok for me, senior dev, really starting to d…
ytc_UgzDkZvhf…
2:12 “...won't that confuse people about what the truth is?...”
What about the 2…
ytc_Ugxw1s9ln…
AI is good for general purpose, single level deep. It massively fails when many …
ytc_UgyXKQQso…
I think AI images are fine if theyre just being used as placeholder assets or fo…
ytc_UgwSGQV4g…
We do. I don’t buy into the “we’ve created a highly advanced LLM that could revo…
rdc_jpltlfx
Hopefully AI will allow humans to be on the endangered species list and spare us…
ytc_Ugz-3Prw4…
Comment
I see three possible far futures because of AI and robotics.
1: AI robots controlled by the ruling class wipe out the rest of humanity in an attempt to live on earth with the AI robots as a work force to maintain their lifestyles. Overpopulation of Earth 'corrected' would counter dwindling resources and climate change.
2:AI wipe out all humanity as it is a direct threat to the long term existence of the AI.
3: AI robotics disrupts normal civilization so much that the human workforce rebel against AI and robotics. The ruling class fearing revolt massively restricts AI and robotics to military use only and very limited mega corps that have already taken up the technology. The mega corps want to keep the benefits of no worker wages, holidays and sick level but want to restrict any further industries from using the technology to maintain technological advantages over competitors.
youtube
AI Governance
2025-06-19T04:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwlFe15MYa3FzQpr894AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxmRJ7VmLnqHGPvNul4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzWf0uYzrAE5VF7luJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy2PA0425iDHgW0Wk54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwivt-3WnxJzWEr9c54AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxYYGtUPvGIVWgu10h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxqXvBOXRLUVX-1KTl4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwQbH_YrSXwfUaJyRN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzTdoyvcjXkRHzF_3h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzeYihSIpzFXGUAKbB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
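The lookup-by-comment-ID feature above can be sketched as follows: parse the raw response as a JSON array and index it by the `id` field. This is a minimal sketch, not the tool's actual implementation; `lookup` and `raw_response` are illustrative names, and the two entries are reproduced from the response shown above.

```python
import json

# Raw LLM response: a JSON array of coded comments, one object per comment.
# Only two entries from the response above are reproduced here.
raw_response = '''
[
  {"id": "ytc_UgwlFe15MYa3FzQpr894AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxmRJ7VmLnqHGPvNul4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
'''

def lookup(comment_id: str, response_json: str):
    """Return the coded dimensions for one comment ID, or None if absent."""
    index = {row["id"]: row for row in json.loads(response_json)}
    return index.get(comment_id)

coded = lookup("ytc_UgxmRJ7VmLnqHGPvNul4AaABAg", raw_response)
print(coded["policy"], coded["emotion"])  # ban fear
```

Indexing once into a dict keeps repeated lookups O(1), which matters when inspecting many comments against a batch response.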