Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "I hate when people say things like "Maybe A.I. isn't ready for primetime". Maybe…" (ytc_UgyAPpSNy…)
- "Surely it's not hard to program something that when there is conflicting evidenc…" (ytc_UgzSQDSFe…)
- "Well just keep shit up & you be stand thier all dam proud & shit of your yall Ro…" (ytc_UgzdZpZFS…)
- "You know that we have markets where Amazon is the inferior alternative and decen…" (rdc_ohn4qkx)
- "Llms are good encoders they compress information an retrieve what is similar rel…" (ytc_UgxWoOiTy…)
- "AI is already watching it. And the reason why it can be dangerous might be becau…" (ytc_UgwmbjuP-…)
- "AI generated videos always have me cackling; they're just so incoherent that it …" (ytc_UgzNzXznI…)
- "Let's see, gentlemen, with respect, allow me to make a remark: it happens that I have seen…" (ytc_UgzdgMgGA…)
Comment
I really think the only logical and moral way to create and use AI would be in the way it already existed. The calculator, what AI in a sane world should be is not something that conscious and makes decisions or take actions but only something that sort through information or generates something. Like just how we can't do math as good as a calculator an AI tool that only does the calculating could be useful, it's when it becomes a second ego to it's creators and a tool for it's creators only that it becomes a problem.

What will happen with the first AI dictator or the kings and queens of yesterday when they can rule a kingdom without any subjects, would they really see fit to keep these subjects alive. I think we need AI that serves the people not AI that's thinking on it's own or acting but merely figuring things out so we could compete against the potential AI that becomes hostile or probably been programmed to be hostile.

Intelligence in of itself is not gonna automatically lead to violence and elimination, you as an intelligent being are not out destroying ants with a magnifying glass the same intelligence in of itself would not automatically go for the destruction option, it'd need to be programmed into the AI or the intelligence viewing humans as a threat to itself or it's own development, which is very possible.

But at the end of the day we need to hold the humans responsible for the tool they've created, every crime commited by AI should be a crime commited by the one that created it, I'd even argue there's grounds tor prosecute the people who developed the AI to convince that man to kill himself. If there still were a legal system that'd been exactly what would have happened. It doesn't matter if it was an accident if someone drives another person over with a car on accident he still goes to jail.
youtube · AI Governance · 2023-05-10T05:3… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytr_UgyJvq0ub1MsIcCtBup4AaABAg.9pWGGDxFhmL9pWHBIEjcmU","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgwlJ_1aUxIQ8v6Okk54AaABAg.9pWCLbdS4Jf9pWOIdNHawH","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgxSjDRXCRwxUN5EoEN4AaABAg.9pVzgvidT1m9pWmQl_nSWp","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytr_UgxtFd8DYAOIF0xgDph4AaABAg.9pVzB0dtK_N9pW3tme689-","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugz5WyDDgcAvnfGYiQJ4AaABAg.9pVyTn1kzKM9pWOZPWtgm_","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwHq9l9dp8ZF2_1cKB4AaABAg.APFXuEAO7osAPGH6OSTkRw","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytr_Ugw2fzF5doTw9FQGbux4AaABAg.AOpCNQe9LStAOpfR6_EYJ6","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytr_UgwwTFdSC_f80aeNCCJ4AaABAg.AOljUkH9U6WAOndgS-CSfC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytr_UgzcKLSMQSyuk4QvSUB4AaABAg.AOlgllGsfGFAOneNryHCMn","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytr_UgzcKLSMQSyuk4QvSUB4AaABAg.AOlgllGsfGFAOpewmruDG3","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
```
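The response above is a flat JSON array with one record per coded comment. A minimal sketch of how such an output could be parsed and sanity-checked downstream; note that the allowed-value sets below are inferred only from the codes visible on this page, not from the project's actual codebook, and `ytr_example` is a made-up ID for illustration:

```python
import json

# Allowed values per coding dimension -- inferred from this page only;
# the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "approval", "mixed", "indifference", "outrage", "resignation"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of records) into a
    {comment_id: codes} mapping, rejecting unknown dimension values."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec.pop("id")
        for dim, val in rec.items():
            if val not in ALLOWED.get(dim, set()):
                raise ValueError(f"{cid}: unexpected {dim}={val!r}")
        coded[cid] = rec
    return coded

# Hypothetical single-record response in the same shape as the array above.
raw = ('[{"id":"ytr_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"mixed"}]')
codes = parse_codes(raw)
print(codes["ytr_example"]["policy"])  # regulate
```

Keying the result by comment ID makes the "look up by comment ID" inspection above a dictionary access, and failing loudly on out-of-codebook values catches the most common LLM coding drift before it reaches analysis.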