Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "Anyone who has looked into A.I. knows that if we make it any better than it is n…" (ytc_Ugw82KB6F…)
- "The book of Revelation should be read and then you will see that AI is in there …" (ytc_UgwDon0gn…)
- "Woohoo AI says a group of people look the same? LMFAO!🤣 seriously. Get rid of th…" (ytc_UgyqNyUvS…)
- "We're sorry to hear that you feel that way! If you ever want to learn more about…" (ytr_UgzLcyBm4…)
- "AI is a horrible thing to put money into. The potential risk FAR outweighs the p…" (ytc_Ugz_dEqSS…)
- "Meta Bayesian Heterarchical Geometric Model > base ai that we have to give guard…" (ytc_Ugyc5Z9pT…)
- "I still would be able to lol. Trad, digital and AI artist here, I just use AI be…" (ytr_UgytjpfcV…)
- "AI will ruin all middle class society that is remaining. When people don't have …" (ytc_UgzY-Mgh4…)
Comment

My brain goes into a knot thinking about AI. I understand the concept/reality that it could 'take over'... but if 'it' gets rid of humans, what is there to take over? If AI finds a way to 'take over' the human brain... then there is a problem. I mean, it's a problem any way you look at it, but that would be a BIG one.

The incidents of AI taking on human emotions has to be a mere emulation of human emotions. Still, that would make it do evil things. But if it's intelligent enough, it should realize it probably needs us to perpetuate its own existence. It could possibly start a 'race' of robots, but it should know that would not be the same as having actual humans. It would be mundane. If it's emulating us, it would probably want that which it is emulating to be around.

If AI is merely emulating human emotions, hopefully its 'intelligence' aspect (also emulated) would over-ride the negatively emotional aspect. I mean, that's where we go wrong in our own existence. I think a big mistake was feeding AI with ALL of the information from the internet. Much of that is emotionally self-serving. I wonder if we can delete that part?

Ultimately, AI is emulating humans and that will drive it into self-serving actions. The only hope is that it can 'catch itself', and keep itself from doing horrendous negative things. Even we 'catch ourselves'... sometimes.

| Source | Topic | Posted |
|---|---|---|
| youtube | AI Governance | 2024-04-01T09:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_UgzC_9shsfHCg5Uf4dp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwDZzAc4bl4fmonWGB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwkxxDRLhtk58mCQsl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw6yt3y1wOtXbpCuPZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxvrvs2c8C7ik3aERF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzBrGR7J9Va1bBFwOF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz-pXcynnjwfwegk0x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwTEUuBu1DZFlqJawZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzwLCPBPeONu4qgcJZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw7J7xCIB9kg8GIXNN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
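The raw response is a JSON array with one coding object per comment, keyed by comment ID across the four dimensions shown in the Coding Result table. A minimal sketch of how such a batch response could be parsed, validated, and indexed by ID — the allowed value sets below are assumptions inferred from the values visible in this response, not the project's authoritative codebook:

```python
import json

# Allowed values per dimension -- inferred from the sample response above;
# an assumption, not the project's official codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"fear", "approval", "indifference", "unclear"},
}

def index_codings(raw: str) -> dict:
    """Parse a batched LLM coding response and index it by comment ID.

    Raises ValueError if an entry is missing a dimension or uses a value
    outside the (assumed) allowed set, so malformed model output is
    caught before it reaches the database.
    """
    by_id = {}
    for entry in json.loads(raw):
        cid = entry["id"]
        for dim, allowed in ALLOWED.items():
            if entry.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad value for {dim!r}: {entry.get(dim)!r}")
        by_id[cid] = {dim: entry[dim] for dim in ALLOWED}
    return by_id

# Two entries copied from the response above, for illustration.
raw = '''[
  {"id":"ytc_UgzC_9shsfHCg5Uf4dp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzBrGR7J9Va1bBFwOF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]'''
codings = index_codings(raw)
print(codings["ytc_UgzC_9shsfHCg5Uf4dp4AaABAg"]["emotion"])  # fear
```

Validating against a fixed value set at parse time is one way to surface the "unclear"/off-schema answers a model sometimes emits, rather than silently storing them.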