Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_Ugwe_5jUR…` — and this is the same type of software that they are using to deport people inclu…
- `ytc_Ugw8ZFGpa…` — This is complete rubbish! You need to understand liability. When a doctor messes…
- `ytc_UgyEecRdE…` — WTF is the question if China, Russia, or Iran are gonna make some virus using AI…
- `ytr_UgxexVPGh…` — @ContaDeestudo-t5w AI is inspired by the work of others, just like an artist who…
- `ytc_UgxZ2O-k-…` — Based on your description of the case, I was wondering if McGee would have any r…
- `ytr_UgzmS80_O…` — Also, I think AI art could be useful for generating concepts or mixed media styl…
- `ytc_UgyiNhEhA…` — I work for open ai. It’s not “tricking” chat GPT. Chat GPT is trained to never f…
- `ytc_UgwYUxzzl…` — Legal protections are necessary for stolen content, but outside that, it's a mat…
Comment
I don't think that's why they fired Sam. From what I read (which was admittedly just a quick Google search), it seems there was an attempt at a takeover of the company that backfired. The safety concerns were about the company, not the AI. AI does have the potential to cause harm to humanity. It's no different than anything else we have created. There is a reason why rogue AIs are often depicted as they are. Some flaw in their system mistakes protocol to protect us from harm as protecting us from ourselves. They often bring up war as it is the most common way humans kill each other in masses. But there are many other, much smaller things we do that lead to harm. We make paints that cause cancer, toys that cook children's hands, cars that do over 200 mph but are only crash rated at about half that speed. The common theme here is ignorance. It's something we all have at some point in our lives. The belief we are right because we don't know enough to know we are wrong. Ignorance is bliss, but knowledge is power.
youtube · AI Governance · 2024-03-23T20:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxQFqCUYkSSwUnCf6x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy8mc8lUy8dBWDNfaF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyyE6e9kz8_JwZjceZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyNa32gZdrpue0NwC54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyZm4PD1zqQR8R4MGB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw5nuo8zQ-DMwgHoDB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzBhIovgvn7zIlb11J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwE-RswdhXOuU7jxZt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwIwhme3ym4GbBICrt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwPdI7SVoE5DwIBvXB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
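A raw response like the one above can be checked before it is loaded into the coding database. The sketch below is a minimal validator, assuming the dimension vocabularies are exactly the labels seen in this dump (the real coding scheme may allow additional values), and that comment IDs always start with `ytc_` or `ytr_` as in the samples here.

```python
import json

# Dimension vocabularies assumed from the values visible in this dump;
# the actual codebook may define more labels.
ALLOWED = {
    "responsibility": {"none", "government", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "approval", "fear", "resignation", "mixed"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and validate every coded record."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this dump start with ytc_ (comment) or ytr_ (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, vocab in ALLOWED.items():
            if rec.get(dim) not in vocab:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

# Hypothetical single-record batch for illustration.
raw = ('[{"id":"ytc_UgxQFqCUYkSSwUnCf6x4AaABAg",'
      '"responsibility":"none","reasoning":"consequentialist",'
      '"policy":"none","emotion":"indifference"}]')
batch = parse_batch(raw)
print(batch[0]["emotion"])  # → indifference
```

Validating at ingest time means a malformed or hallucinated label fails loudly here rather than surfacing later as an odd value in the Coding Result table.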