Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I haven’t watched the video yet, but as an early AI programmer dating back to the early 90s, I will say with 1000% certainty that AI will not destroy humanity, if humanity doesn’t trust it too much for tasks which requires sentience.
Here’s why, no matter what anybody tells you, AI cannot create anything, it’s only capable of doing what someone programmed it to be able to do.
Example, put two babies in a crib. On their own, they will learn to communicate and survive together.
Put two computers next to each other without any programming in them, forever they will sit there and do nothing.
There’s your proof.
EDIT AFTER WATCHING THE VIDEO: none of those AI models would’ve compromised human safety if the end goal included not to endanger humans or commit what are considered to be immoral acts.
What that means is, all of those scenarios were included in sub categories, not as a main line and goal.
So again, AI is no threat so long as the goals that are programmed into it include not harming human beings, or destroying the ecology, etc…
| Platform | Topic | Posted | Likes |
|---|---|---|---|
| youtube | AI Governance | 2025-08-26T16:2… | ♥ 7 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzZulSU8a4S5Q6HnI54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzAxDcMebRzoUGITKB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyrDzKcwd9_Rp3cePN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxun3SgG5T-M7mA8MZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgzQjmuD0xdj1kWe05R4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"}
]
```
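A raw response in this shape can be turned into per-comment coding records with a small parser. The sketch below is a minimal example, not the tool's actual implementation: the allowed value sets in `ALLOWED` and the `ytc_`/`ytr_` ID prefixes are assumptions inferred from the values visible in this dump, not an authoritative schema.

```python
import json

# Assumed category vocabularies, inferred from the values seen in this dump.
ALLOWED = {
    "responsibility": {"user", "government", "ai_itself", "company", "developer"},
    "reasoning": {"consequentialist", "virtue"},
    "policy": {"none", "regulate", "unclear", "industry_self"},
    "emotion": {"approval", "fear", "indifference", "mixed"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and drop malformed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs in this dump use ytc_ (comment) / ytr_ (reply) prefixes.
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        # Keep only records whose every dimension takes a known value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgzZulSU8a4S5Q6HnI54AaABAg","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
print(len(parse_codings(raw)))  # → 1
```

Validating against a closed vocabulary like this catches the common failure mode where the model emits a category outside the codebook, so such records can be flagged for re-coding rather than silently stored.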