Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I predict humans will kill the creators of AI, its a mater of survival after all…
ytc_UgxRIxmgu…
Google likely has more training data and more computing resources than open ai.
…
rdc_jpmaqjq
Just stop using technology. The question is, why can't you? Because you are depe…
ytc_Ugw5F3QBu…
I showed one of my friends my are that I spent hours on and then they said that …
ytc_Ugxvxsg_P…
I use AI art. I don't want to "do" art, but sometimes I need art for something. …
ytr_UgxRhSWF9…
" unlike AI art which directly benefit large corporations" which corporations? M…
ytr_Ugz8BfMRz…
@Ven_isCool
being an artist and accessing art are two very different things, A…
ytr_UgwLxhqPI…
So this mat be another clue to my theory what if there is a secret developer try…
ytc_UgwxqOWZW…
Comment
I don't fear AI.
I fear people.
The people who architect the AI,
and the people who collectively make up the culture that consumes the AI.
AI is, at its heart, just an algorithm that is trained to automate human processing based on training data from which it derives a heuristic to transform said data from one form to another (language-to-language, language-to-images, language-to-video, etc.). That is all it is. It is, by itself, nothing to fear.
What we *should* be afraid of, has always been the same: People. The people who are building these systems in such a way that their particular heuristic is automated, with its bias and slant; and that is only a fear we should fear based on the population at large which delegates its agency to such systems.
In my opinion, these conversations are slanted towards the wrong angle: Building these systems in a "safe" way? By whose standard? The government's? As if they, or the systems by which they arise to power, are somehow immune to the corrupting influences of human nature? I mean, are you *serious*?
AI is just automated human nature.
It is human nature we should fear.
youtube
AI Governance
2025-12-08T04:0…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwxcK7ICuLPqGOPRUp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx9V2yKHkQGGVzuLYR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwbOWSxNAxVe9qSFGh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzJR41fL25Fj9UoXAx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw1uDYbSZOYsGTEJnJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyTfdKoVkKIbjM-bQd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw83qLGqfXQND9z6DN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyAe5VirSFmYqhmnz54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzhElowQoh3DW4wlJR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw2zUzx3yQPK-eK4Kx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
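A raw LLM response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal validator, assuming the categorical values observed in this sample (e.g. `developer`, `consequentialist`, `regulate`, `outrage`) are representative; the full codebook may define additional categories, so the `ALLOWED` sets here are an inference from the data shown, not a confirmed schema.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# ASSUMPTION: the real codebook may permit more categories than these.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"fear", "outrage", "approval", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Every row must carry the comment ID it codes.
        if "id" not in row:
            continue
        # Every dimension must be present and hold an allowed value.
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

sample = (
    '[{"id":"ytc_x","responsibility":"developer",'
    '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]'
)
print(len(validate_codings(sample)))  # 1 valid row
```

Rows that fail validation can then be flagged for re-coding rather than silently stored, which keeps the coded dataset consistent with the dimension table shown above.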