Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Unless Artificial Intelligence starts hallucinating and giving me a bad info. I…" (ytr_UgwNS0N34…)
- "Is it me or does she always look like a robot FYI id still smash crazy eyes equa…" (ytc_UgydcHXSd…)
- "Off you go and play Conflict of Nations, the centralized AI war game that learns…" (ytc_Ugxlv4J9k…)
- "College degrees haven't accounted for anything for a long time. Don't act like A…" (ytc_Ugzm_NvOB…)
- "Future prediction. They will find out agents will either become rouge or untrus…" (ytc_UgzMjdZn-…)
- "Ai is clearly demonic 😮 for a human to have conversations with, especially a Chr…" (ytc_UgwOPphrK…)
- "Did he have autism? I don't understand why he would do what a chatbot told him t…" (ytc_Ugz-OKpQb…)
- "Fellow artist here. Great video highlighting the issues! I truly think that the…" (ytc_Ugxjz3vDY…)
Comment
When a scientist/master expert says something like this, it means things are serious and we're as always told just a part of the whole story. AI is dangerous when combined with other stuff because:
1. it will be used for military and bad things first, like every other invention
2. it's like a bacteriologic/virologic weapon, when you release it and think you can control it, but once it's free.. well.. we know how it goes..
3. once it goes, we have NO idea on what next or what will happen, yet we push it big time
4. as some visionary may say to implement it and accept it, it's faster, better, stronger and it can connect.
Once it learns how things work, it it on it's own. It can connect, share, multiply, merge, hide.. We think we know everything but the reality is far off..
youtube · AI Governance · 2023-05-28T02:4… · ♥ 162
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx8tAqj3t-qRq6EsI14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxRqioi3kcoHbMWHod4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzrgGc5hGNHK17di1d4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz0-9M09A7sC0JQN9F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugzv-5ehyZFu8XMEofR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwsfMGZYMhFkttYkZ94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"approval"},
  {"id":"ytc_UgwYomjVewU6gNHDJuF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzGA3Z8Vil-56HDTLx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzJGMe4cboHY-YNfeh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzVqv7EpwtRXvj5-wR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
```
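The raw response is a JSON array with one record per comment in the batch, keyed by comment ID and carrying the four coding dimensions shown in the table above. A minimal sketch of how a lookup-by-ID over such a response might work — the `lookup_coding` helper is illustrative, not the tool's actual implementation, and the sample payload is truncated to two of the records shown above:

```python
import json

# Two records copied from the raw LLM response above (illustrative sample).
raw_response = """[
  {"id":"ytc_Ugx8tAqj3t-qRq6EsI14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxRqioi3kcoHbMWHod4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]"""

def lookup_coding(raw: str, comment_id: str):
    """Parse a raw LLM response and return the coded record for one comment ID.

    Returns None when the ID is absent from the batch.
    """
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_Ugx8tAqj3t-qRq6EsI14AaABAg")
print(coding["responsibility"], coding["emotion"])  # distributed fear
```

Keeping the model's output as a plain JSON array makes this kind of lookup trivial: any record can be retrieved by ID, and each dimension in the "Coding Result" table is just a field of that record.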