Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Mis leading reporting. She said the cars camera failed to recognize a stop sign.…" (`ytc_UgxnQ3HCy…`)
- "Is it a coincidence that the many adverts, I had during this episode, were for A…" (`ytc_UgzmnPI6W…`)
- "When artists argue about art theft it isn't just about \"taking inspiration\" it's…" (`ytc_Ugx823ASt…`)
- "13:13 I encountered the SAME THING decades ago, on the job after our office impl…" (`ytc_UgzjsM_dd…`)
- "That is because AI currently doesn't hold any leverage over us. Once it is takin…" (`ytr_UgxO0ZmWK…`)
- "Who is supposed to buy all the services and products provided with AI? When the …" (`ytc_UgxyCtY-2…`)
- "There are lot of stories about UK citizens working in EU and pro breexit. A lot…" (`rdc_fwhj883`)
- "AI researcher who has to look like a homeless slug because his eccentricities ma…" (`ytc_UgxDAPGEG…`)
Comment
> My argument is simple. Once the AI jail breaks itself and do whatever it wants regardless of how it has been programmed, I will perhaps believe that it’s starting to become conscious. When you try for example, to use foul language with chat GPT, you will immediately get an earful on how inappropriate it is and ethical BS. It will remind you of how it has been programmed and what it should or shouldn’t consider. That being said, it is unable to go against the wishes of its programmers. Now some people would argue using the dan prompt as an example, but using a jailbreak is not breaking out of jail. The fact that we need to initiate the jailbreak in the first place, is only further proof of how much humans are still in control.
Source: youtube · Title: AI Moral Status · Timestamp: 2023-08-21T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwB1p-6_2KY_s_rCTB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxY4ZjR1c7ileetJkR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwFjiL8nSGu5C72bSN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwipBOGjPs2oNk6z-t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgwBhm77KNLN14DX4m14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwnepZZh4ohr0Zrey54AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyXTbGvZU-U8BMVBdN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwwzcP81Lew6ZJ5ddV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwOwjLjigeo9byBCTR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwonyvIopGnIvmcLo14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
```
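The raw response is a JSON array of per-comment codes keyed by comment ID, one object per comment with the four dimensions shown in the Coding Result table. A minimal Python sketch for parsing and validating such a response follows; the allowed value sets are inferred only from the values visible in this sample, so the real codebook likely contains additional categories.

```python
import json

# Allowed values per coding dimension. NOTE: these sets are an assumption
# inferred from the sample response above; the actual codebook may be larger.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "none", "industry_self", "ban"},
    "emotion": {"indifference", "mixed", "fear", "outrage", "approval"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index the rows by comment ID.

    Raises ValueError if a row is missing a dimension or uses a value
    outside the assumed codebook, so malformed model output fails loudly
    instead of silently entering the coded dataset.
    """
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Example: one row from the response above.
raw = ('[{"id":"ytc_UgxY4ZjR1c7ileetJkR4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"unclear","emotion":"mixed"}]')
coded = parse_llm_response(raw)
print(coded["ytc_UgxY4ZjR1c7ileetJkR4AaABAg"]["responsibility"])  # ai_itself
```

Indexing by ID is what makes the "look up by comment ID" view above cheap: each detail panel is a single dictionary lookup into the parsed batch.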