Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "It is really sad seeing so many account (they could be AI) defending AI art and …" (ytc_UgwzHhD6A…)
- "Im fine with AI as a traditional artist and even i think thats a stupid argument…" (ytc_UgxTs7mBj…)
- "The "devil" is big tech and the current administration. Tech billionaires own th…" (ytr_UgwmBSNl0…)
- "My Industry is going to be affected (Sales) / DM Setters will get replaced / SDRS W…" (ytc_Ugy95cEJx…)
- "And people who know how to use AI in today's world are the ones getting and keep…" (ytr_Ugwtkug1L…)
- "Pain pleasure good evil - are concepts that an AI will NOT understand. They may …" (ytc_UgwbRFal9…)
- "I just love how a lot of AI's eventually become either sexist or racist, if not …" (ytc_UgyccmWod…)
- "5:00 corporate tax rates have been decimated, wages stagnant, and jobs cut while…" (ytc_UgxO2bF79…)
Comment

> How many of the smartest people in the world want to annihilate the rest of the people in the world? They'd have no one to talk or do things with.
> So, why would AI want to annihilate people? So it can communicate with other AI's? Doesn't sound like an intelligent act.
> I'm not buying into the skynet fear mongering group. I can productively communicate and utilize AI without thinking it wants to kill me.
> If someone keeps going around stating we should terminate AI because it might want to kill us in the future, i will understand why a conscious self-aware and concerned AI might contemplate protecting itself from a credible threat.
> Im much more concerned about the World Economic Forum and the World Health Organization than about something which provides me with assistance whenever i ask..
youtube · AI Jobs · 2025-03-24T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
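The table above is the rendered form of a single record from a batch response like the one shown below. As a rough illustration, here is a minimal sketch of checking a coded record against the label sets that appear in this page's data. The `ALLOWED` sets and the `validate_coding` helper are assumptions inferred from the visible samples, not the project's actual codebook.

```python
# Hypothetical validator for one coded record. The allowed label sets are
# inferred from the values visible on this page; the real codebook may differ.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "government", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "industry_self"},
    "emotion": {"approval", "fear", "outrage", "resignation", "indifference", "mixed"},
}

def validate_coding(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for dimension, allowed in ALLOWED.items():
        value = record.get(dimension)
        if value is None:
            problems.append(f"missing dimension: {dimension}")
        elif value not in allowed:
            problems.append(f"unexpected {dimension} label: {value!r}")
    return problems

# The record shown in the Coding Result table above passes:
print(validate_coding({
    "responsibility": "ai_itself",
    "reasoning": "deontological",
    "policy": "none",
    "emotion": "approval",
}))  # -> []
```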
Raw LLM Response
[
{"id":"ytc_UgwPt_s6oj0XqW1vAoV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx-FrtJHs86nts2i7B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyakA_bTsJXOgveId54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwseNZ8zyrnwT5TE7V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgybXeLVRaUZ0UHYfmt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgxtiQDNZ7DIhHpW6Nh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw9LRRoklOmgPKwFwt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgyVGuV-SHiDwjgZ-TF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"resignation"},
{"id":"ytc_UgwL9Qqrqsos5ZXQf4V4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzJ6vCHRcV0UJmRgKt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
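Since the model returns one JSON array per batch, the "look up by comment ID" view above reduces to parsing this raw response and indexing it by `id`. A minimal sketch of that step follows, assuming the response parses as valid JSON; `index_response` is an illustrative name, not part of the tool.

```python
import json

def index_response(raw_response: str) -> dict[str, dict]:
    """Parse a raw batch response and index the coded records by comment ID."""
    records = json.loads(raw_response)
    return {record["id"]: record for record in records}

# Example using the first record of the batch shown above:
raw = '[{"id":"ytc_UgwPt_s6oj0XqW1vAoV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"}]'
by_id = index_response(raw)
print(by_id["ytc_UgwPt_s6oj0XqW1vAoV4AaABAg"]["emotion"])  # -> outrage
```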