Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "No, but Steve Jobs drove the agreement to not compete for talent in Silicon Vall…" (rdc_ohuzi3x)
- "I also want to point out that AI has been programmed to promote liberalism . Its…" (ytc_Ugz8hIHuW…)
- "What AI is definitely going to do within the next decade is take away millions o…" (ytc_UgznzYLZr…)
- "One thing AI cannot do is get emotional seeing a sunset, we are the only species…" (ytc_UgzWKsBa-…)
- "Csc is related and to sit back watching ai doing there sections on earth. How ni…" (ytc_Ugx7SwFrP…)
- "It’s all about the mass surveillance grid that China already has. The chat bots …" (ytc_UgwnZzzmd…)
- "Easy -- all the way (without compromising safety, of course). Over the last 10 …" (ytr_UgyTUbzE_…)
- "I am not an artist so I have got no say in this but still wtf is an AI artist it…" (ytc_UgzwJP9Ma…)
Comment
I do not have to have had a private conversation with a billionaire to know this. Why can’t other people see that companies, specifically the big executives, do NOT CARE about anyone else? They do NOT CARE what harm their actions have or don’t have. They’ve never had to face any real consequences. They see, they want. It is almost that simple. They are literal psychopaths who say things in public to manipulate people because it is still expected that they interact with us at some level, but if they had their way, they would be literal tyrants, absolute dictators who answer to no one.
The fact that Steven hesitates in answering about Musk and doesn't respond to the statement that he (Musk) has done some really bad things shows that he is willing to forgive and excuse egotistical, drugged out maniacs just because they have money. Inexcusable.
We cannot control what we do not understand. Period. Anyone who thinks we can build them, and we will control them because we built them is fooling themselves. And we can guess at contingencies and plan accordingly but look to sci-fi for what could happen. Look at Dr. Who’s Cybermen, or Star Trek’s the Borg. AI will look there too, eventually. I really feel like we need to slow TF down on building AI. Why are we throwing billions and billions of dollars and resources racing to build these? Imagine what we could do if we spent a fraction of that on restructuring medical care insurance into a national plan like the NHS (before the companies there are trying to leech from them) or what Canada does instead? Or better?? Or tackling homelessness, or addressing the basic needs of all people??
Edit to add - the tech companies and most companies DO NOT CARE about hurting society. It is the bottom line, their investors and stockholders - how much they can make and what do they need to do to stay "ahead". Grow Grow Grow at ALL costs!! Why do you think so many companies are throwing money at AI – because it will replace the worker. The expensive, fallible worker who can cause expenses to go way up, cause lawsuits, cause damage to company reputations, etc.
We've long ago lost control of any sort of moderation to their greed and debauchery. AI is being built and our (America) leaders are mostly ignorant, old, white folk being puppeteered by the ultra-wealthy. We are very flawed creatures. We are smart enough to control ourselves, to do better, but we lack self-discipline to moderate ourselves and it is sickening. Especially because I know for a fact that we can do better. We, as a society, should have to kill and steal and hurt others to be successful as a civilization.
youtube · AI Governance · 2025-06-26T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugy1xnYBjEz-UJ5d41d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwbpo07-1ytsmhcLpt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwUCfanmspWzOCO90J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxp9sp3gbUnWqJGuLp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxy3lXEAH6QJJdlYIV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxKls1VsJStxXjLrMt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxILtX8bvl-3waxgcl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxtF4SdFiLzdExvNP14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzvMCakEqTRmqs_UYB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxm6w5kV6KfuAiz_xR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
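The "look up by comment ID" step above amounts to parsing the raw JSON array and filtering on the `id` field. A minimal sketch (the function name `lookup_coding` is illustrative, not part of the tool; the two entries are copied verbatim from the response above):

```python
import json

# Raw LLM response: a JSON array of per-comment codings whose keys
# match the dimensions in the Coding Result table (sample trimmed
# to two entries from the response shown above).
raw_response = """
[
  {"id": "ytc_Ugwbpo07-1ytsmhcLpt4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugy1xnYBjEz-UJ5d41d4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse the raw model output and return the coding dict for one
    comment ID, or None if that ID was not coded in this response."""
    codings = json.loads(raw)
    return next((c for c in codings if c["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_Ugwbpo07-1ytsmhcLpt4AaABAg")
print(coding["emotion"])  # outrage
```

Because the model output is plain JSON, `json.loads` either yields the list of codings or raises `json.JSONDecodeError`, which is a natural place to flag responses that drifted from the expected format.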