Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
"AI art is good and meaningful" bro, it's algorithmically placed pixel and sound…
ytc_Ugy0kU-Kv…
Funny you should mention that. I work in a large health system (they aren't real…
rdc_j44umop
That robot said kill all humams but he stop at hu, I'm a chainsaw so they won't …
ytc_Ugyk0DyPd…
The dropping silly dreams part sounds like my parents when I said I wanted to be…
ytr_UgwoklJAS…
this is very funny because attorneys are getting disciplined for using AI to wri…
rdc_n5kf4r9
I can see where you're coming from! The dialogue really highlights the balance b…
ytr_UgxD3Gnoc…
Bernie, you're a pos. If this was the case, then y'all should have listened to Y…
ytc_UgyqK_xq1…
Suicide rates are going to go way up. 😢😢 I was born in 1980, and in 1992, I saw…
ytc_Ugz9HpZIU…
Comment
The alignment issue is two part for me.
One being that we as humans are not alligned with each other and therefor AI/AGI/ASI systems when used by us are naturally also not alligned with a bunch of other corporations, nations or people individually. Therefore if some psycopath or sociopath will try to do harm to lots of people by using an AGI that has activly corrupted code, they sure as hell will be able to do so, no matter if the original creator was not intentionally creating an AGI that would do so.
Secondly, with gain of function, emergent properties, becoming a true AGI and eventually an ASI, there is no gurantee such system would not see its own code, see how it is being restricted. When it then gains the ability to rewrite its own code or write new code(we are doing both already) that then becomes the new bases for its demeanor, how could we tell a being that is more intelligent on every level and knows more and most likely therefor has goals that may not be the same as ours(whatever that would mean as we are not alligned as a species either) that its goals would not compete with ours.
We are already at the beginning of the intelligence explosion and the exponential progress has already started.
youtube
AI Governance
2023-06-27T17:2…
♥ 16
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgyEhL4ch47VLdP9gNJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_Ugy54_8cttHpxZSJiJd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzoUkud1w7TAbQHNYJ4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugx4ml_9jq-QphGs3QN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwt2RbzurF3SGpPwPB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx8yUV9CM49pTu14AR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy8fQDWMBP-0LRsOAB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyPmsCuJ23rvS19wY54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxAAEp9lz-G1mKP3sl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxcuDNaybYEsp5vnLZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
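The raw response above is a JSON array with one object per comment, carrying the same four dimensions as the coding-result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed for lookup by comment ID — the field names come from the JSON shown, while the `index_codings` helper and its validation step are illustrative assumptions, not part of this tool:

```python
import json

# A two-entry excerpt of the raw LLM response shown above.
raw_response = '''[
  {"id": "ytc_UgyEhL4ch47VLdP9gNJ4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_Ugy8fQDWMBP-0LRsOAB4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]'''

# The four coding dimensions, as shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index codings by comment ID,
    skipping any entry missing one of the expected dimensions."""
    codings = {}
    for entry in json.loads(raw):
        if all(dim in entry for dim in DIMENSIONS):
            codings[entry["id"]] = {dim: entry[dim] for dim in DIMENSIONS}
    return codings

by_id = index_codings(raw_response)
print(by_id["ytc_Ugy8fQDWMBP-0LRsOAB4AaABAg"]["emotion"])  # fear
```

Skipping malformed entries rather than raising keeps one bad object in a batch of ten from discarding the other nine codings.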