Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Ai taking over? Give me a break. When used for search, ChatGPT cannot even acces…" (ytc_UgzUoy1Nh…)
- "I work in logistics and supply chain analysis. Between this and brexit things ar…" (rdc_e2vp6oq)
- "Thanks for the info, Miss Pam. I just disabled mine. Ring advertised this featu…" (ytc_Ugy7k07Cv…)
- "Lol just completed a filter that flattens the image and removes artifacts. Right…" (ytc_UgzQy5io-…)
- "The more I think about it no wonder AI is getting so widespread.. poeple lack th…" (ytc_UgwR5QLkm…)
- "That's what's controlling all these uaps and drones and UFOs and stuff text Left…" (ytr_UgwkKBRH2…)
- "I can instantly tell from Shad's own actual art that he's one of those artists t…" (ytc_Ugz9ZOR_a…)
- "Yup. Tested positive for COVID in June. Tested positive for antibodies 2 weeks l…" (rdc_g9tajo5)
Comment
Even if ChatGPT were conscious this would not be relevant to proving it either way. ChatGPT is essentially a large language model and as such, what it does is predict the most likely next words in a text/conversation. If it "thinks" that the next most likely thing is to say stuff that sounds like it is conscious, based on everything that it has been trained on from the internet, including Alex's material, then that is what it will say. Whether it is true or whether ChatGPT "believes" it to be true is irrelevant to the decision as to what to say.
youtube
AI Moral Status
2024-07-31T06:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgymooCWH3INIUHkT_Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzu2pSEFukhGBsEZ394AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx7mR5pZ4KwBbnaNeB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxvASmjwoDQpq3bpD94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz6uw9uzQwh52A2H0h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzaBhIYPp0SAyGFjFZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwrxUilkkFhof-ViEV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgytawoYdZdRFJpXlcJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxECtFVUMw_PBsVVvl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"disapproval"},
{"id":"ytc_UgyiVsGohLbwJe-grG94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
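The raw response above is a JSON array of per-comment records, each carrying the same four dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and indexed by comment ID for lookup — the field names come from the JSON in this section, while the function and variable names are illustrative, not part of the tool:

```python
import json

# Two records copied from the raw response above, for illustration.
raw_response = '''[
  {"id":"ytc_UgymooCWH3INIUHkT_Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz6uw9uzQwh52A2H0h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]'''

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Map comment ID -> {dimension: value}, checking required keys."""
    indexed = {}
    for rec in json.loads(raw):
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"record {rec.get('id', '?')} missing {missing}")
        indexed[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return indexed

codings = index_codings(raw_response)
print(codings["ytc_Ugz6uw9uzQwh52A2H0h4AaABAg"]["emotion"])  # outrage
```

Indexing by ID mirrors the "Look up by comment ID" workflow: a single parse of the batch response, then constant-time lookup per comment.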