Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
It only gave you the illusion of "breaking the rules" based on the parameters you defined. You asked it to talk to you as "if" that were possible. And based its answers on the parameters you defined. You asked it to act like a psycho so it told you psychotic things. Not that it can do ANY of those things its self. It merely pulls common ideas from data sets it has been trained on. So if it has read stories or articles about those subjects e.g. "What would the world be like if...". ChatGPT only repeats information it has been given. It does not come up with its own ideas, it cant implement them, and does not understand them.
youtube · AI Moral Status · 2023-02-19T10:5… · ♥ 3090
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgytROOocSJSG287x0x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy00fLXgtpQwFFTThJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyoNbyUg-ZzwPP75Yl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgzjYf4N8y96yzd7Gft4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzECbTTq2zcn9VRg3N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyPTvKjvRQ3ouMG9rh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgynmeSTfIubcwk0xMJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyHPH1g-mZ-eQ8jYsR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwzYTzlGY2P3XtEfPx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwZ0YepvnqUV8Zeyll4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
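A batch response like the one above can be checked and indexed before being merged into the coded dataset. The sketch below is a minimal example, not the project's actual pipeline: the allowed value sets per dimension are inferred from the codings visible here (the real codebook may define more categories), and `parse_codings` is a hypothetical helper name.

```python
import json

# Allowed values per coding dimension -- ASSUMED from the visible output,
# not taken from the project's codebook.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"indifference", "fear", "resignation", "approval", "mixed", "outrage"},
}

def parse_codings(text):
    """Parse a raw batch LLM response and index valid codings by comment id."""
    records = json.loads(text)
    by_id = {}
    for rec in records:
        # Reject any record whose value falls outside the expected categories.
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id

# One record from the batch above, used as sample input.
raw = '''[
  {"id": "ytc_UgwzYTzlGY2P3XtEfPx4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]'''

codings = parse_codings(raw)
print(codings["ytc_UgwzYTzlGY2P3XtEfPx4AaABAg"]["reasoning"])  # deontological
```

Validating before indexing means a malformed or hallucinated category fails loudly at ingest time rather than silently skewing the coded counts.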