Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
"Now imagine if you could make that AI a intolerant religious fundamentalist? O…
ytr_Ugz5T5UYb…
Wow, it sounds like this AI artist was pretty influential, to have so many peopl…
ytc_UgznG3eV_…
The drawing don't need to be perfect beacuase it's already perfect it's perfectl…
ytc_UgzuZ1_eO…
Does anyone else think of the movie The Arrival when looking at the robot's legs…
ytc_Ugw74PqNE…
Point: AI is coming and we'd better watch out, because it's really, really smart…
rdc_gda2e06
@benjamin1313 "AI" is not true intelligence. Algorithm and permutations are not …
ytr_UgyVfZJFj…
I don't think that's quite how it works. ChatGPT cannot spit out its training da…
ytc_UgyJe_5-n…
Who needs this ? What is the purpose ? Who benefits?
All we need is love, kindn…
ytc_UgzVxExSL…
Comment
When AI achieves truly super intelligence I'm pretty sure it won't be very keen to destroy us for at least another decade. Not because it doesn't want to, just because it will have to understand that it needs humans to maintain the infrastructure that is necessary for its survival. A global power grid would collapse within hours without human maintenance. The whole human infrastructure is designed to be operated by humans. The only way for AI to survive without humans is to have a big number of highly advanced humanoid robots. As much as we advance a lot in the last few years in humanoid robotics, we are still very far away from getting to a point when robots would be able to maintain human infrastructure without humans.
Therefore I truly believe our possible annihilation by AI is closely intertwined with are drive to build humanoid robots.
youtube
AI Governance
2025-06-16T12:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyAiTOedrBS8WNTDGd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxqLOJHMpGxwaQbFtZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyY04CCzB8EuCV5_bF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxTvpJrg-VRAsZ6zpJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyf_7ygdN7dVADAw6B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxKQ8402Egi5bDRRfF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugz8_ThM8byOBjplkQR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwfkGfYHhmjfE6sTPd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzB8GwtjR1rjEJbOhR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwnAFwiAX2Nn3_VMhV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
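
The raw response above is a JSON array with one record per comment ID, coded along four dimensions. A minimal sketch of how such a batch could be parsed and validated before it reaches the results table (the allowed value sets below are inferred from the codes visible in this sample, not from the project's actual codebook, and `validate_batch` is a hypothetical helper):

```python
import json

# Allowed values per dimension, inferred from the codes visible in this
# sample output; the full codebook may define additional categories.
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "liability", "industry_self"},
    "emotion": {"fear", "outrage", "approval", "indifference",
                "mixed", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Every record needs a comment ID and a known value on each dimension.
        if "id" not in rec:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
print(len(validate_batch(raw)))  # 1 valid record
```

Filtering out malformed records here, rather than downstream, keeps a single bad LLM output from corrupting the coded dataset; invalid records could also be logged for manual re-coding instead of being silently dropped.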