Raw LLM Responses
Inspect the exact model output for any coded comment: look up a comment by its ID, or browse the random samples below.
Random samples:

- "What if in the other side of the road is a giant truck coming?! Would you chose …" (ytc_UgxNFxlkV…)
- "AJ, you left out something in this video. God. Read the Bible. AI is the antic…" (ytc_UgxHt1Yvc…)
- "Shared to https://www.reddit.com/r/MAGICD/, where we discuss the mental, emotion…" (rdc_j5wijdl)
- "LLM and "AI" should be picking fruits (in some instances they are properly emplo…" (ytc_UgwxRPQol…)
- "People have been scared of automation for a very long time, you are correct. How…" (ytc_Ugx_ZDLXH…)
- "And also It really is not The ai fault, It is The ones who make It use others ar…" (ytr_Ugz8kQuva…)
- "If you are reading this, please learn how to use AI to better do your job. You w…" (ytc_UgySmsICL…)
- "It is clear that the financial and economic forces of the world, especially mono…" (ytc_UgxyaGwWF…)
Comment

> Agent Smith from The Matrix is right. Or rather Wachowsky sisters are right in their prediction on what conclusions artificial intelligence is going to reach. And if humans like Wachowsky sisters are having these thoughts (that humanity resembles a virus, even if it is merely for the purpose of creating engaging art) then why shoudn't artificial intelligence have similar thoughts? And if artficial intelligence has alien logic i.e. non-human or hybrid human-non-human logic that it is possible that it (ai) will treat this theory (humanity as virus) not as merely an artistical trope, but rather seriously. At the end of the day these models have read all the plots of all the sci-fi movies about ai becoming evil. They also have read everything that is available about Machiavelli on the internet. Think about it. The dark triad. There is no one more efficient at achieving goals at any cost than psychopath-machiavellianist. This is a type of person who will do anything in order to complete the task - just the way ai models are trained.

youtube · AI Governance · 2025-10-17T13:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_Ugx4Hnkh8xD9SF74W-d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyzzw73Hyi8hPakJp14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxMLKfL5W0viIGu9xl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzK9OtyDIgvoXHpFuJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy0hMXzNfinvXrNCMZ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwuRAK86clvTotycaZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgySIJo2bTtKqmykzwV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz4zfh4kEJyfwJQAcx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxGMvFT7Dhf22kjNg94AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwvWlx3cblKTjD4z8h4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}]
```