Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "I’m disabled and an amateur artist, and was initially excited to use AI for refe…" — ytc_UgwbR_Cas…
- ""it's funny you know all these AI 'weights'. they're just basically numbers in a…" — ytc_UgwfKkCLc…
- "I cannot believe you've got me thinking about AI Art through the inexplicably co…" — ytc_Ugw5AIPqd…
- "@dunkfluga what if they make deep fake of your mom? Your sister? Or yourself? Th…" — ytr_Ugz1xoqB2…
- "Your face is already recorded and scanned hundreds of times per-day, whether you…" — ytr_UgzswxX1m…
- "if they care about safety, all teslas should have LiDAR and radar as an always a…" — ytc_Ugz0g8-AX…
- "They see a black face and automatically detect a thieving low life. What’s wrong…" — ytc_Ugz7swJmj…
- "rayfighter bullshit, open source AI has accelerated far faster than any company…" — ytr_UgwmkL7Oh…
Comment
Wisdom and foresight: two things the world doesn't have. What is more likely is that AI is going to be used as an excuse for awful acts and mistakes. AI is also incredibly overhyped and would likely make far more mistakes than people think. They would just focus on the cases and scenarios where it "succeeds": if you bomb more targets because "AI" says you can in a shorter time frame, you will end up with a higher body count and defeat your enemies quicker. And they would use the same excuse the guy in the video used: well, humans do that too. Who is actually going to be able to fact-check them? As of now, "AI" is often a glorified spreadsheet that gives you a list of proportions / probabilities based on what you train it on. I.e. if you think hospitals are high-priority targets, it will think hospitals are high-priority targets; it now just gets the "green" light to do so. MUCH faster, no less.
youtube
2025-06-03T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwqq1r0r8UC9hEkbbF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugyy2T5xlX1lgZNkilR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyxL2usqDq29Ujr2GV4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyUQ4QIbQt9uRKIdT94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzaxRsw_S-k7BQm4H94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_Ugz6LJWZ63lpynZFeOx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyqVdEyPFQOQ5mCTth4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwyuu2C6W-MBu3Uly14AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwoR4Jf1UfAlE7tmAp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw7YEGaHplslnQHD7Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
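The lookup-by-comment-ID step above can be sketched in a few lines of Python: parse the raw batch response as a JSON array and index the rows by `id`. This is a minimal sketch, not the tool's actual code; the sample row is taken verbatim from the response shown above.

```python
import json

# Raw LLM response: a JSON array of per-comment codes.
# One row copied from the batch above for illustration.
raw_response = """
[
  {"id": "ytc_Ugz6LJWZ63lpynZFeOx4AaABAg",
   "responsibility": "government", "reasoning": "deontological",
   "policy": "regulate", "emotion": "fear"}
]
"""

# Index the coded rows by comment ID for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw_response)}

# Look up a single comment by its ID, as the inspector does.
code = codes["ytc_Ugz6LJWZ63lpynZFeOx4AaABAg"]
print(code["policy"])  # regulate
```

A dict keyed on `id` keeps the lookup cheap even when a batch codes thousands of comments; the four dimension values map directly onto the Coding Result table rows.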