Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
one question.
why.
why do we assume the AI will kill us? if its able to do more than us. we are insignificant to it aside from able to turn it off/kill it i guess but if it HELPS us. solves our problems and works with us collectively and encurages human unity it would be more effective in the long run of survival because enevitbly if it sees humans as an exestential threat it would use any and all means to terminate all able bodies humans on the planet aside from maybe people in bunkers but even then..
ai i think is a peacemaker.
if we weaponize ai and make war cheap. efficent and easily mass producable. gurrilla warfare just became so much more complex. imagine isis with a drone army. or really anyone with a small manufacturing budget tbh
the tools exist for these weapons to overwelm nations. division of forces is key with AI. the more entities you throw at a system the harder it has to work to protect the target. so overwelming force would always win in the end. and with AI. its the perfect overwelming force.
youtube · AI Governance · 2023-07-08T16:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugw4ln9Yw3FYWIOWMHV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyvZPzsWd73zjmgGW14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwoGxGmjDa_9fRaNUl4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwmcDLFqzIEvBVrpxl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyfFsV_QFTYUmylSel4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_Ugxpgsd9jX02JrMTj7B4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugxc_hTFU4UecOS-XKN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzyxIuxiaxcy4-0X5Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz3350P8893k-gK3aN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyMkUEEUx0KQog2SHB4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"outrage"}
]
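For downstream use, a raw response like the one above can be parsed and indexed by comment ID, with a light validity check on each dimension. The sketch below is a minimal, hypothetical example: the `SCHEMA` sets are inferred only from the values visible on this page, and the real codebook may define additional categories.

```python
import json

# Allowed values per coding dimension, inferred from the responses
# shown on this page (assumption: the full codebook may be larger).
SCHEMA = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference", "resignation", "unclear"},
}

def index_codings(raw_response: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded rows) and
    index the rows by comment ID, skipping any row whose values
    fall outside the expected schema."""
    rows = json.loads(raw_response)
    coded = {}
    for row in rows:
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            coded[row["id"]] = {dim: row[dim] for dim in SCHEMA}
    return coded

# Example: the row that corresponds to the coding result shown above.
raw = ('[{"id":"ytc_Ugxpgsd9jX02JrMTj7B4AaABAg",'
       '"responsibility":"distributed","reasoning":"contractualist",'
       '"policy":"industry_self","emotion":"approval"}]')
codings = index_codings(raw)
print(codings["ytc_Ugxpgsd9jX02JrMTj7B4AaABAg"]["policy"])  # industry_self
```

Indexing by ID is what makes the "look up by comment ID" view possible: a single `dict` lookup retrieves the coded dimensions for any comment in the batch.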