Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Chris Hrapsky is just creating a fearmongering example just to get more hits/likes on his video. That is unethical! It only gave you the illusion of "breaking the rules" based on the parameters you defined. You asked it to talk to you as "if" that were possible and based its answers on the parameters you defined. You asked it to act like a psycho so it told you psychotic things. Not that it can do ANY of those things itself. It merely pulls common ideas from data sets it has been trained on. So if it has read stories or articles about those subjects e.g. "What would the world be like if...". ChatGPT only repeats information it has been given. It does not come up with its own ideas, it cant implement them, and does not understand them.
Platform: youtube | Topic: AI Moral Status | Posted: 2023-04-01T03:2… | ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxtuhHP9hGn8-3jsGh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzx696XXoelzF12ZFJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugzk6xGjQJo-zicVraJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyYlJ6dZeXPy4cmFI54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"frustration"},
{"id":"ytc_UgwThoMkTMAIpUmIcxp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgyDKtv9-QxCohyne9p4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugym3pFTvig5twoNhIV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx4rDHJxemKEsLSjcp4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgxZwBv-a9TTo5SEDDh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_Ugy_6UJQT0JTHiHQ_mh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
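The raw response is a JSON array of per-comment codes, one object per comment ID, with the four dimensions shown in the table above. A minimal sketch of how such a response could be parsed and validated before storage (the allowed value sets below are inferred only from the values visible in this sample; the real codebook may define more):

```python
import json

# Excerpt of a raw model response: two code objects copied from the array above.
raw = '''[
{"id":"ytc_Ugzx696XXoelzF12ZFJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxtuhHP9hGn8-3jsGh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]'''

# Allowed values inferred from this one sample, not an authoritative codebook.
DIMENSIONS = {
    "responsibility": {"user", "developer", "ai_itself", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"unclear", "regulate", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "mixed", "frustration",
                "indifference", "approval", "resignation"},
}

def validate(codes):
    """Split parsed code objects into valid rows and (id, problems) error tuples."""
    valid, errors = [], []
    for row in codes:
        problems = [f"{dim}={row.get(dim)!r}"
                    for dim, allowed in DIMENSIONS.items()
                    if row.get(dim) not in allowed]
        if problems:
            errors.append((row.get("id"), problems))
        else:
            valid.append(row)
    return valid, errors

codes = json.loads(raw)
valid, errors = validate(codes)
```

Validating against an explicit value set like this catches the common failure mode where the model invents an off-codebook label, so malformed rows can be queued for re-coding instead of silently entering the dataset.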