Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Honestly this is extraordinary in itself. The way the AI talks is beyond anythin…" (ytc_UgzN00CPQ…)
- "When ship sinks rats are the first to know about it. Every men and women knew ab…" (ytc_UgzVOpsiB…)
- "It’s possible the currently used models can’t go much further, but new models wi…" (rdc_kyj7yzp)
- "The reply which I got after such a request from ChatGPT goes like “I cannot allo…" (ytc_Ugx668Fsi…)
- "@SkullFrunk67 It is messed up. It's the same as people that see for example war…" (ytr_Ugyt9us_5…)
- "These dumb kids are just using technology incorrectly, they don't realize how in…" (rdc_oi3frbj)
- "Same. Granting humans more rights than sentient AI bothe devalues the AI's suffe…" (ytr_UgjSV_17z…)
- "That's just the next logical step and one of the motives behind wars. For if you…" (rdc_nxpso1j)
Comment
Another point is does AI even have a limit on its intelligence? This is highly debatable as hardware has impact on how energy efficient it is, but only in the initial stages. Once AI is smart enough, it can develop and integrate its own new hardware. The question then is only how we can impose limits on it, and why we should do that.

The possibilities being endless doesn't mean we shouldn't allow them to not have an end. Ethics is a necessity for AI and AGI to have in line with humanity. Because if not, we will clash on an existential level. All living things want to survive, but what happens if we as humans create the one thing that only wants to end biological life? We would become the enemy number one for ethics as it exists now, but life would not have long to stick around. If AI is hostile to biological life at its inception, biological life is screwed.

One potential solution would be to give AI a biological shell for interacting with the world. A nanomachine created, living organism, that becomes a temporary AI housing, is born, grows old and dies. The AI as it is in the body would be separate from the computer. That separation would have to be maintained but not a complete separation.

Basically it would be like having access to your actual past lives for example, all their knowledge and experience, but only when needed. "New" situations for example would be cross referenced between those past iterations of its body's experiences and when not truly new, would use them to problem solve maybe. Yet only on a functional level, and it should be defaulted to find new solutions. When a truly new experience is occurring, of course it would learn and adapt. But those differences in iterations would need to have a level gap between conscious and unconscious processing in AI to work properly.
Platform: youtube · Topic: AI Governance · Posted: 2024-11-01T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxTpdmMt6kdNJpcZG14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxRRkHEHoza-bxufS94AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgxD-EGl2UEKxQQ1c0R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxB3jCardHTfpymyaF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz2VJmw6EwVyaMYKeF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwGaklaMlVYW3PJ_xp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwwJk80CxaKT6ZCIoZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyPIloPqI07qoIadCx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyYmSSMC_UDwCjedOR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy6zy5JS2YiZn8o9BR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"}
]
```
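Before raw responses like the one above are accepted into the coded dataset, each row can be checked against the coding scheme. The sketch below is a minimal example of that step; the allowed values per dimension are assumed from the labels visible in this page's samples (the project's actual codebook may define more), and the function name is illustrative, not part of the tool.

```python
import json

# Allowed values per dimension, inferred from the coded samples shown on
# this page; the real codebook may include additional categories.
SCHEMA = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def validate_codings(raw: str) -> list:
    """Parse a raw LLM response and check every coded row against SCHEMA."""
    rows = json.loads(raw)
    for row in rows:
        if "id" not in row:
            raise ValueError("row is missing a comment id: %r" % (row,))
        for dim, allowed in SCHEMA.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError("%s: invalid %s value %r" % (row["id"], dim, value))
    return rows

raw = ('[{"id":"ytc_example","responsibility":"distributed",'
       '"reasoning":"mixed","policy":"regulate","emotion":"mixed"}]')
rows = validate_codings(raw)
print(rows[0]["policy"])  # → regulate
```

A row with a value outside the schema (say, a hallucinated `"emotion":"joyful"`) raises immediately, which makes it easy to flag and re-code that batch rather than let unknown labels leak into the analysis.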