Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "When you allow a very few very very wealthy sociopaths to decide for you.... del…" (ytr_Ugx_uPh_Z…)
- "Dude, the ai artists like me like your work and that's why they are attempting t…" (ytc_UgyBU-n_T…)
- "All of the comments in this are so defeated. Go outside, people. The sun is stil…" (rdc_hm79dd8)
- "Man Terminator really locked in the cultural idea that a Superintelligent AI wou…" (ytc_UgwTRvZUr…)
- "AI goes brrrrrrr. Get rekt. - sincerely, a pathology lab tech, getting rekt ri…" (ytc_UgxXCiMR8…)
- "So, basically all those LLM models were trained giving them free access to the a…" (ytc_UgzUz6wVP…)
- "Canadian here from Alberta, please anyone that is receiving CERB put away 15% of…" (rdc_fn5m9dn)
- "The rights are not respected now, robots or no robots we have people who are rob…" (ytc_UgxTmo-WT…)
Comment
> While I agree with the risks, there is practically ZERO chance we will pull back on this technology given who we are as a species. I also don't believe it's impossible to control. It's just a really really hard problem to solve and one that will require a lot of energy behind. Given these two facts, being overly pessimistic that we are doomed and we need to stop doesn't help the situation. People like him should be spending the majority of his time working on technologies, policies AND education on why they're needed to defend against the worst outcomes, hopefully that can buy enough time from complete annihilation for us to learn how to effective control these systems.
>
> For one thing, I would pass legislation that chain of thought reasoning for every choice an AI system makes must be audit logged in a form that humans can understand and is immutable (like the blockchain) but also submitted to some central (none private store) that defensive AI systems will monitor for tricks and harmful outcomes. Also, all AI systems MUST have a core safe guard to cause no harm. Creating any model without these kinds of safe guards should be highly illegal and the book thrown at you. Yes it may be possible to have a super intelligent singularities eventually that we can't understand at all. Though, I leave the door open on if we can be augmented to be super intelligent ourselves but putting that aside, hopefully we can learn enough and become smart enough in the short term to be able to defend against the worst case scenario before it gets too late. That's the only rational course of action to take at this point.
Source: youtube · AI Governance · 2025-09-04T15:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwO2pJQNigCWbpSXtZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwul83pAyoFR_TI3G54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxCn0GeW7-5wVdpJZF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw7kPzGyX1PpUQ0_P54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyIBakRnT6_zpLKEIJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyS4ZX5nigBNxUWofV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyNTcKO9_QkvRBh6MF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzJ2l1xDVzGipvQ1pl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxQc6XPEl_m8kDhKGN4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxj5l1r7xHUUkpVF354AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
```
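The raw response above is a JSON array with one record per comment, each carrying the four coding dimensions shown in the result table. A minimal sketch of how such a response could be parsed and keyed by comment ID follows; the `index_codes` helper is hypothetical, the dimension names come from the sample, and the full codebook of allowed values is not shown here, so no value validation is attempted.

```python
import json

# Two records copied from the sample response above, for illustration.
RAW = '''
[
 {"id":"ytc_UgwO2pJQNigCWbpSXtZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugxj5l1r7xHUUkpVF354AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
'''

# The four dimensions visible in the coding result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw):
    """Parse one raw LLM response and key each record by its comment ID."""
    coded = {}
    for rec in json.loads(raw):
        # Reject records that lack an ID or any coded dimension.
        if "id" not in rec or any(d not in rec for d in DIMENSIONS):
            raise ValueError(f"malformed record: {rec!r}")
        coded[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return coded

coded = index_codes(RAW)
print(coded["ytc_Ugxj5l1r7xHUUkpVF354AaABAg"]["policy"])  # regulate
```

Keying on the comment ID is what makes the "look up by comment ID" view possible: the coded record for any sampled comment is a single dictionary access.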