Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
"Of course conscious AI would deserve rights, but the title assumes the premise t…" (ytc_UgwnzGo5u…)
"you will lose this fight anyway 🤷♂️ my bet, ai will be able to decipher this ve…" (ytc_UgyvuwuPe…)
"I think these interviews need to be more technical. They can't be general but so…" (ytc_UgxjoaBpN…)
"The only way using ai to make art makes you a artist is if you make the ai…" (ytc_Ugwc0KBvs…)
"thank you for this interview.. this is Totally terrifying and obviously 100% WRO…" (ytc_UgxlAD_sH…)
"At some point won’t AI begin consuming its own output? We know what happens when…" (ytc_UgyPENHm8…)
"An art professor of mine said we should get comfortable using Ai almost like we …" (ytc_UgzcmAFZ0…)
"There are many conditions that occur at the part per 10000 to part per million l…" (ytc_Ugzikunn7…)
Comment
@CodexPermutatio Unless AGI is "aligned" (controlled is still a better word), it would only rely on humans for as long as that is rational. Even if "caged" (like a chatbot) it could first use (manipulate) humans as tools to make itself better tools. Then it would need humans no longer.
Maybe if we could create a human-like cognition, it would be easier to align it or keep its values under control (we'd need to mechanistically understand our brains and emotions first). But all our current AI systems (including those in serious development by Meta) are not following this approach at all.
youtube · AI Governance · 2023-06-27T20:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_Ugz8xg_TAUp50sGdgEh4AaABAg.9rPvpEz94vU9rTx70S0Rsz","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgwRg0KJemLVpW6t2ex4AaABAg.9rPYZJbr5b39rTlC_DGHlx","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytr_UgwRg0KJemLVpW6t2ex4AaABAg.9rPYZJbr5b39rU14m4eHLF","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytr_UgwRg0KJemLVpW6t2ex4AaABAg.9rPYZJbr5b39rhCV0ZL18D","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgzuxRs_BKrl6JIqN_B4AaABAg.9rPRpVBUzUW9rPp6KkFuGT","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgzuxRs_BKrl6JIqN_B4AaABAg.9rPRpVBUzUW9rU0BErm0H6","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgwMSBDoNzy8g3RLmlt4AaABAg.9rPH0awsbg09rj8XXtugv2","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugwp8jS3Ka-LbhS0UCx4AaABAg.9rPEb_4SgMm9rSm6Y2E2Km","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgxB7Y9xAPQXXJSV6m94AaABAg.9rPDxNI2VJc9rQ-8TiYDMl","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgwkyKlTs7O7KBb2pCV4AaABAg.9rP5RLTN4nr9rR7sXcgyOH","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"}
]
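The raw response is a JSON array with one coding object per comment. A minimal sketch of how such a response could be parsed and checked before the values reach the result table above; the allowed values per dimension are inferred only from the codes visible on this page (the real codebook likely has more categories), and `validate_codings` is a hypothetical helper, not part of the tool:

```python
import json

# Allowed values per coding dimension. ASSUMPTION: derived solely from the
# codes appearing in the response above; the actual codebook may differ.
SCHEMA = {
    "responsibility": {"ai_itself", "user", "developer", "company", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "unclear"},
    "emotion": {"fear", "resignation", "outrage", "approval", "indifference", "unclear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and verify every coding against the schema."""
    rows = json.loads(raw)
    for row in rows:
        if "id" not in row:
            raise ValueError(f"coding is missing a comment id: {row}")
        for dim, allowed in SCHEMA.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim!r} value {value!r}")
    return rows

# Hypothetical single-row example (the id is a placeholder, not a real comment).
raw = ('[{"id":"ytr_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
rows = validate_codings(raw)
print(rows[0]["emotion"])  # fear
```

Rejecting out-of-schema values at parse time keeps a malformed or drifting model response from silently landing in the coded-results table.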