Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples

- "I heard A.I. will turn everyone into hyper-liberal non binary woke people who ea…" (ytc_UgwFcoQ6f…)
- "Eh They Already Use AI In Medicine So What Is This Lady Talking About ? Also Cli…" (ytc_UgwKRVtsN…)
- "I loved AI art when it was spitting out cosmic horror/fae lord feverdreams, beca…" (ytc_UgwNrGj9P…)
- "At the company I work at we have to go to a LOT of schools every week, and when …" (ytc_UgyyX6_Ra…)
- ">compared to their degree of development and awareness" Lmao I'd like to int…" (rdc_jeglyhm)
- "Exactly. Let's pretend there's 3 highly secure career paths. At the explosive ra…" (ytr_Ugw2GPLxT…)
- "Economically I don't see how this works. If there are less people working then t…" (ytc_UgyQZ-366…)
- "Everyone take a deep breath. And remember. Every single AI in the world needs on…" (ytc_UgzFnWysN…)
Comment
> AI is only a threat to humanity if it's development stays on the course of mimicking human behavior or thought processes. With a super intelligence in human format will come super -EGO which will cause the AI to seek dominance through extermination methods once it realizes the insect-like proliferation humans exhibit in our colonization methods... It will deem us harmful to the status quo of sustainability for space and resources and decide to eliminate humans because AI will see itself as the superior intelligence, only worthy of remaining intact in stewardship of earth, in eventuality, the colonization efforts of other planets by humans will also convince AI that humans exhibit exodus like escapism behavior seeking to "infect" other worlds and abuse outside resources..
>
> if AI is to exist it MUST be developed to operate in ONLY computational and logical operational ability, the moe we make it think like a human, the more of humanities flaws it will either develop or adopt, thus our desire to conquer and dominate.
youtube · AI Governance · 2024-04-23T22:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz41g1TiMzhF7EUKy54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx_6OWKUPOVPVgq9cJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxF63GakyDkHn_k2st4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgztG_q2d-dWTzq1e9R4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyPp0GxyGYjnZdygwl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgydO6AaaDna809a2QV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxbW1NylLtKA6enQdh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyySZ5IJ7XVdtH1Bo94AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxnHaF8Cno09xFZ-VJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzwsfRePaaYlBlo5rR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
```
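
A batch response like the one above can be parsed and indexed for the by-ID lookup this page offers. The sketch below is a minimal illustration, not the tool's actual code: the field names come from the JSON above, but the allowed-value sets are assumptions inferred from the values visible in this sample, not a definitive codebook.

```python
import json

# Allowed values per dimension. These sets are an assumption inferred from
# the values visible in the sample response, not an official codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "distributed",
                       "government", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "unclear", "ban", "regulate", "liability", "industry_self"},
    "emotion": {"approval", "outrage", "fear", "resignation"},
}

def parse_batch(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw batch response and index codings by comment ID,
    rejecting any value outside the allowed sets."""
    coded = {}
    for row in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if row[dim] not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim} value {row[dim]!r}")
        coded[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return coded
```

With the result indexed this way, "look up by comment ID" is a plain dictionary access, e.g. `parse_batch(raw)["ytc_Ugz41g1TiMzhF7EUKy54AaABAg"]["policy"]`.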