Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytr_UgzLjHLiJ…`: @Alterrspo yeah tell me you dont know how AI works without telling me. Not all A…
- `ytc_Ugxyq_mr0…`: Callng bullcrap. Gaslighting America using AI for more money from a failed Amer…
- `ytc_UgwHzvt0m…`: This reminds me of the movie I Robot, as Elon Musk said AI is dangerous and the …
- `ytc_Ugzze9Uvh…`: EVERYONE SEEMS TO KEEP FORGETTING THE MOST IMPORTANT FACTOR: QUANTUM COMPUTING.…
- `ytc_Ugx-ECB3G…`: Is it just me or does ChatGpt sound like Kim Cheatle during her subpoena with co…
- `ytr_UgxHWrUL9…`: It’s only natural for a superior specie to want to assert and demonstrate domina…
- `ytr_Ugw3XD8WE…`: @AIWhispererMax I only referenced one of Padgett's YouTube videos. Robert E Gran…
- `ytc_UgxnC4PRN…`: AI is no problem in South Africa, we are governed by imbeciles, so we never have…
Comment
Here's a question I'd like to hear raised and answered. Where we already have general intelligence, ie, well educated and trained human intelligence, why make a synthetic one. My expectation is because that can be exploited by a small team or company to a degree that human intelligence can't. So why do the rest of us want that? how does that work economically at scale?
I'm not categorically for or against automation, I exploit a lot of automation in my work, it's great, we don't need to turn soil with our hands anymore, but like anything there is a point where something that has seemed great, becomes inappropriate and destructive, and this huge push to exploit AI to death and develop AGI is being driven too fast just to harvest investor money before people have enough info to really decide whether it's a good idea. I fell like everybody involved in AI right now have lost the plot, and many of us should have learned enough to see it by now. Wtf is going on.
youtube · Cross-Cultural · 2025-07-07T03:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
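Each coded dimension takes a value from a small closed set. As a minimal sketch, the sets below are inferred only from the values visible in the raw responses on this page, not from an official codebook, so the real scheme may contain additional labels:

```python
# Allowed values per coding dimension. NOTE: these sets are inferred from the
# raw LLM responses shown on this page and are a hypothetical reconstruction,
# not the project's authoritative codebook.
SCHEMA = {
    "responsibility": {"government", "company", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "mixed"},
}

def validate(row: dict) -> bool:
    """Return True if every coded dimension holds an allowed value."""
    return all(row.get(dim) in allowed for dim, allowed in SCHEMA.items())

# The coding result shown in the table above passes validation:
print(validate({"responsibility": "company", "reasoning": "consequentialist",
                "policy": "liability", "emotion": "mixed"}))  # True
```

A check like this is useful because LLM coders occasionally emit labels outside the scheme; rejecting such rows before analysis keeps the coded dataset consistent.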
Raw LLM Response
[
{"id":"ytc_Ugxo3FIgzMePZHMVlER4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwLasqhlfZsAzYw_uF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxMm15i0GZHFRHom0B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwLm_QnY4MiAIQkr-p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzew2f_O2aXAfI42694AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugz1Vd1FFENJMroqnNV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwWHAuiQpmN2GT22YJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwSGwWEX9BVrynQHdF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxwSrA_hjqPtBS4tDp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"ban","emotion":"mixed"},
{"id":"ytc_Ugydr7rKyxpgmgNKi_54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
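The raw response is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch of the "look up by comment ID" step, using two rows copied from the response above (parsing and indexing are assumptions about the tool's internals, not its actual implementation):

```python
import json

# Excerpt of a raw LLM response like the one above: a JSON array of
# per-comment coding objects, each carrying the comment's ID.
raw_response = """
[
  {"id": "ytc_Ugzew2f_O2aXAfI42694AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgwSGwWEX9BVrynQHdF4AaABAg",
   "responsibility": "ai_itself", "reasoning": "unclear",
   "policy": "unclear", "emotion": "fear"}
]
"""

# Index the coded rows by comment ID so a single comment's codes can be
# retrieved directly.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

row = codes_by_id["ytc_Ugzew2f_O2aXAfI42694AaABAg"]
print(row["responsibility"], row["policy"])  # company liability
```

This mirrors the page's lookup widget: given a comment ID, it returns the coded dimensions that the table above displays for the matching comment.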