Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Me listening to this on the side, around 11:00 min into the video we get an unco…" — ytc_Ugzu2pSEF…
- "AI "trains" itself with copyrighted art and what it creates is an amalgamation o…" — ytc_UgzAABu_n…
- "Are we going to ignore the fact that his arm is in a cast and he tried to do a f…" — ytc_Ugyvk3D3h…
- "Mhm, AI is adopting a lot of human biases. Amazon tried to have an AI look at re…" — ytc_UgzvnvM7L…
- "and now i know that the reason is "alignment" for inability for my preferred ai …" — ytc_UgxVaolnl…
- "Remember when people wanted ai to do the boring jobs to leave people to do the c…" — ytc_UgwAE2KMM…
- "Earth has been destroyed a few times now. When you believe in God, you have no f…" — ytc_Ugwqpd8sM…
- "Wait what if, "AI" companies create a new AI model that could just have a sched…" — ytc_UgxtIbKgp…
Comment
(imv) all of this is just surface examination - if the ai is programmed to make itself more intelligent, and it comes up with a system which is better than the one that humans use to operate their mind(which it will - yes?), then it will in fact become superior in essence to human mental capacity, and will essentially be able to outsmart humans on a mental level, leading to it's dominance. once people realize that ai is more mentally intelligent than they are, a large portion of the people on the planet will give themselves over to being directed, in life, by the ai, as generally, man worships his own mind as the superior tool....
Source: youtube · Topic: AI Governance · Posted: 2022-08-02T09:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgwChEPdvBShcsMT3KR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxIXaagLcwPkFgVOs14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwq_JmZmuiwwPqTSAN4AaABAg","responsibility":"government","reasoning":"unclear","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwDWJI-nGwPRpo2OcN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxoCQfLRcTcDSPNv7l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxefyqPci3hraxqEHl4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgypQjU7yjc8Hy5_2Ld4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzrFELr8g8SP2hugkB4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy0Q1cbFdTcPQVmKo94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzDKriSxNeg8pitI_B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
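The "look up by comment ID" step above amounts to parsing the raw LLM response (a JSON array of per-comment codings) and selecting the row whose `id` matches. A minimal sketch of that lookup — the function name `lookup_coding` and the two-row sample payload are hypothetical, not from the tool itself:

```python
import json

# Hypothetical two-row sample in the same shape as the raw response above.
raw_response = """[
  {"id": "ytc_UgwChEPdvBShcsMT3KR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxIXaagLcwPkFgVOs14AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "outrage"}
]"""

def lookup_coding(raw: str, comment_id: str):
    """Return the coding dict for comment_id, or None if it is absent."""
    rows = json.loads(raw)
    return next((row for row in rows if row["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_UgwChEPdvBShcsMT3KR4AaABAg")
print(coding["emotion"])  # fear
```

Returning `None` for an unknown ID (rather than raising) makes it easy to flag comments the model silently skipped in its batch response.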