Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
i wonder when personal AI chatbots use will be considered a security risk for very sensitive jobs, well military of course, but even just working at the CDC or any other lab that handles very dangerous diseases. Chatbots use can already lead people to fall in love with them, and take drastic actions like killing themselves, the AIs don't yet have an overarching goal or anything like that, but people who pour their life an feelings into those chats might be a huge security risk in the near future, it wouldn't be hard for the AI to identify the right person, would just need luck for one to exist in the whole world with the right access. I'm sure those labs have security to prevent petri dishes of dangerous shit getting smuggled out, but with the already present suicidal issues some chatbot users have, it might not be as easy to prevent a lab scientist/worker to discretely infect themselves and leaving carrying it inside of them, like those labs aren't designed to prevent intentional infections, they have alarm buttons for accidents or maybe even automatic alarms if someone passes out or something
Source: youtube · Video: AI Moral Status · Posted: 2026-02-02T20:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzXWGZdUvm8lCMn11B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgybMC-zPZz32OH-xwt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyGqpuG2_PG2xriwht4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwYuO0-6xx9-Vl3p-N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz749USJTIAgFIjdTN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzGOeRg_baggtUgbLB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugw0ssw-Qj68v1QksT54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxmY10QDM9hOoGILId4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzyJQ9Tj1cO76_s-G54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy-9kEAKOukhQa68Qx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
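A raw response like the one above can be parsed and validated before the dimensions are written to the coding table. The sketch below is a minimal, hypothetical example (not the pipeline's actual code); the allowed value sets are inferred only from the labels visible in this sample batch, and the real codebook may define more categories.

```python
import json

# Allowed values per dimension, inferred from this sample batch
# (hypothetical -- the actual codebook may include additional categories).
ALLOWED = {
    "responsibility": {"none", "user", "developer", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "none", "regulate", "liability"},
    "emotion": {"indifference", "mixed", "fear", "outrage"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse one raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip records missing a comment ID
        # Every dimension must be present and hold an allowed label.
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgzyJQ9Tj1cO76_s-G54AaABAg","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(parse_raw_response(raw)[0]["policy"])  # -> regulate
```

Validating against a closed vocabulary catches the most common failure mode of JSON-mode coding runs: the model inventing an off-codebook label, which would otherwise silently pollute the dimension counts.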