Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- Is chatgpt's response different? In Indonesia, his answer isn't like that. He us… (ytc_UgxD8QYK1…)
- Love to see that. Let those companies bleed money as long as possible until they… (ytc_UgymJvk72…)
- This is a great post, and wonderfully thought out. But none of the fears you lis… (rdc_j4y4wtz)
- "AI training breaks copyright and using artists works without their consent to c… (ytc_UgxNDmRHW…)
- Driverless cars will never actually be useful. There’s always going to be errors… (ytc_Ugx098dHQ…)
- I will say it another time... but Reasons why we live in the lamest cyberpunk dy… (ytc_Ugwiv_KW1…)
- If anybody thinks this is gonna change the trajectory is sadly mistaken. It is g… (ytc_UgxnobEOv…)
- I am a software engineer. I don't specialize in AI, however I have tried these A… (ytc_Ugw4Y4sMi…)
Comment
The last time man created a technology we were instantly afraid of it controlled us from day 1. This time it may do more than control us. There is no Red Button for AI. It will determine when to strike. The capacity for AI to benefit humanity is huge, but will we see this side of it? Or will it behave with all of the arrogance and selfishness of it's creators? I fear it will be the latter.
Source: youtube · Posted: 2025-07-21T19:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgxPCtbv3ygBcljxE0x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxUp1HK3SV5uZJ7qxZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxB6n4DRLNy6PtFIV54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzpt4Me0V9G0XssVHZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx-NjMnLk5qHXXGAG54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy5jZwA7rFqoZoail54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugzwicjftk-4DJ1XPfF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgyjLDRWTnrPDPvlCPJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxkbzAIxK96BMiAuuR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz6EZBJC9N8Kyu8lM14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]
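A raw response in this shape can be indexed by comment ID to recover each row's coded dimensions. The sketch below is a minimal illustration, not part of the actual pipeline: it assumes the model returns a JSON array like the one above, and `parse_codes` is a hypothetical helper name. Missing dimensions default to "unclear", matching the fallback value shown in the Coding Result table.

```python
import json

# Hypothetical raw model output, abbreviated to two rows from the response above.
raw_response = """[
 {"id": "ytc_UgxPCtbv3ygBcljxE0x4AaABAg", "responsibility": "ai_itself",
  "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
 {"id": "ytc_Ugzpt4Me0V9G0XssVHZ4AaABAg", "responsibility": "company",
  "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw: str) -> dict:
    """Index the model's JSON array by comment ID; any dimension the
    model omitted defaults to 'unclear' so lookups never fail."""
    rows = json.loads(raw)
    return {
        row["id"]: {dim: row.get(dim, "unclear") for dim in DIMENSIONS}
        for row in rows
    }

codes = parse_codes(raw_response)
print(codes["ytc_Ugzpt4Me0V9G0XssVHZ4AaABAg"]["emotion"])  # outrage
```

Looking up an ID that is absent from the batch (rather than present with "unclear" values) would raise `KeyError` here; a real lookup tool would distinguish those two cases, as the all-"unclear" table above suggests.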