Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
"3D structure of protein will help scientists understand diseases and develop ne…
ytc_UgwEMYlDq…
I feel like humans taking inspiration requires them to put effort into honing a …
ytc_Ugzejmysr…
I'm glad that i'm from Eastern-Europe cuz i know that there even in 50 yrs jobs …
ytc_Ugw3ONmWH…
Well said Sam, if you choose to give your art to a AI collective, you should be …
ytc_UgyquGfv4…
Here is popular thought on this in Korea: the government spent more than $20 mil…
rdc_cjot3ca
This would be a great time for anyone to binge watch the series "Person of Inter…
rdc_icgcj3r
They rushed for AI for consumer business. It is not yet at that level. May be in…
ytc_UgzdQXmtz…
AI can't take your jobs. I remember in India when people were against computers …
ytc_Ugxk_G52l…
Comment
@neorock6135 he does not understand AI very well, nor the inevitable progression it will make. This failure on his part has led him to preaching a path people should follow which would be EXTREMELY bad for people to follow.
If he manages to influence the USA and/or Europe to follow his path it would 100% result in global nuclear war which would collapse human civilization back into the dark ages. Then, when human civilization recovers and we come back to this same stage in human evolution, he would have a repeat global nuclear war to keep knocking humanity back into the dark ages until eventually the human race becomes extinct. He knows this and in his opinion this would give humans a longer period of time to live as a race than allowing Artificial General Super Intelligence with Personality (AGSIP) individuals to be created by humans, because he believes that the moment an AGSIP individual is created that is the 100% end of humanity.
Besides the fact he is wrong, following his advice is about as bad as a possible future one could pick for humanity to follow.
Source: youtube | Topic: AI Governance | Posted: 2024-12-25T17:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytr_Ugx-1YyJ1WEmPHZqv454AaABAg.ACGVETIsRr1ACRPphuwGmf","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytr_UgwQoPdEBfrvHfAEKrx4AaABAg.ACGSy30qfxkADCddlB9LDp","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgzNSeFjLvOFK41zHuB4AaABAg.ACG73SS3EOhACQBU8h78-4","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_UgzNSeFjLvOFK41zHuB4AaABAg.ACG73SS3EOhACTA6VXuEf0","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytr_UgzNSeFjLvOFK41zHuB4AaABAg.ACG73SS3EOhACU8Kd3Hz1F","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_UgzNSeFjLvOFK41zHuB4AaABAg.ACG73SS3EOhACUEuw0mkFQ","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxIULAE6SIwBx-rDLx4AaABAg.ARRkOzwjusfAUH7XcHDMHb","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytr_Ugwi3Gu9MeUEvSKbaDZ4AaABAg.ARPWLjoxGdjARW8biul1a5","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytr_UgxYY2DONTX-QoDY3pl4AaABAg.ARPB1EygcGkARPeEOwYHnl","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytr_UgwFqxhfI1tkDHy9Xjd4AaABAg.ARP4WJfYbjCARPw0FwwimW","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"}
]
```