Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `ytr_UgxIgt3ib…`: "That's not a good future-proof strategy for containment or for your future sleep…"
- `rdc_cym15cb`: "I drive for Uber. While uber sucks ass to work for, I only do it when I have abs…"
- `ytc_Ugw7Ks1Dj…`: "Where was the pencil artist when the drawing programs eliminated the inker and t…"
- `ytc_UgzLM6mdr…`: "My 2 cents, ai won't be like the terminator but world leaders /the elite might …"
- `ytc_UgxjEVFtD…`: "True hypocrisy of greed. If you make ai that can do the deadliest of things for …"
- `ytc_UgwN7nfTs…`: "AI is going to threaten most jobs. The question is not if, it is when. The fact …"
- `ytc_UgyyeqJ2f…`: "My opinion is it's just another medium (digital art), meanwhile AI involves no p…"
- `ytc_Ugztx9c1b…`: "So AI Robots will have free will,and they agree on dick sucking? I imagine porno…"
Comment
Unknown. The concept is to take a new species that will be faster, smarter and vastly more capable than us and that is also capable of editing, modifying and upgrading its own code and data and then force those entities to always act in the benefit of humans or, at the very least, not to perform any actions that would be detrimental to humans. At present humans are not smart/wise enough to even be able to clearly and accurately state the issue without ambiguity (aka what does "harm" mean in minute detail). For instance an ASI taking an action that poisoned the atmosphere of Earth would be very bad and should not be allowed, but something akin to a human stubbing their toe is no big deal. If we get any even minor detail wrong humanity could end up with a very bad outcome.
In order to protect humanity's continued existence and autonomy we need to either 1) solve AGI/ASI alignment or 2) stop AI progress before we invent AGI/ASI. If we create an unaligned AGI that is capable of autonomy and self improvement odds are humanity either goes extinct or suffers a really bad outcome. Anything else is just wishful thinking or sticking one's head in the sand and ignoring the potential negative consequences for terminal positivity, greed, personal profit, or whatever.
Source: youtube · Topic: AI Governance · Posted: 2023-10-23T06:1… · Likes: 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytr_UgyicAhpCxnw2kIDT0d4AaABAg.A7UuqN5EGDUA7__UpzyaA5","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgypwtV20W8ttHleDfd4AaABAg.9wC_322r8cS9wEAvcF9tsq","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgypwtV20W8ttHleDfd4AaABAg.9wC_322r8cS9wZgVizHIR1","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_Ugw9KVsXFWTKvVdrPI94AaABAg.9vyzpTkX7nV9wCHBDoS54T","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgxUxjuoM-syIbOVDt54AaABAg.ACdeXZRRGClAF7mE3vE-fB","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgxwrKzhQhqT75PXZc94AaABAg.A6x7kxnMCD7A8YYYxDDDje","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytr_UgxtydDTmWApEBlq9yF4AaABAg.A5l0wKx2Z77A6NZOp5VSjx","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgxGTymqlaePJgvUrrJ4AaABAg.A4hXk0cuiN4A5GwuUBam1-","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_Ugw5DYcM-rLDUhFhrvh4AaABAg.A4aFh8PiFI7A4aG-aCryEk","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytr_UgxV9Je314SDaoGg0t94AaABAg.A3ys1Xgs8rnAD4ckjJJEkL","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
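The raw response is a JSON array with one object per coded comment. A minimal sketch of how such a batch might be parsed and validated before use — note that the per-dimension category lists below are assumptions inferred only from the values visible on this page, not an authoritative codebook:

```python
import json

# Allowed values per coding dimension. These sets are inferred from the
# values that appear in this dump; the real codebook may differ.
VOCAB = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval", "indifference", "mixed"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed entries.

    An entry is kept if it is a dict with an "id" and every coding
    dimension holds a value from the (assumed) vocabulary above.
    """
    entries = json.loads(raw)
    valid = []
    for entry in entries:
        if not isinstance(entry, dict) or "id" not in entry:
            continue  # skip malformed rows rather than failing the batch
        if all(entry.get(dim) in allowed for dim, allowed in VOCAB.items()):
            valid.append(entry)
    return valid

# Hypothetical two-entry response: one valid row, one with an
# out-of-vocabulary value that gets filtered out.
raw = (
    '[{"id":"ytr_example1","responsibility":"developer",'
    '"reasoning":"deontological","policy":"regulate","emotion":"fear"},'
    '{"id":"ytr_example2","responsibility":"alien"}]'
)
print(parse_codes(raw))  # only the first entry survives validation
```

Filtering row-by-row rather than rejecting the whole array is a deliberate choice here: LLM batch output occasionally contains a single malformed entry, and dropping just that row keeps the rest of the batch usable for the coding table shown above.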