Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Is AI being smarter than humans necessarily a problem? Some humans are smarter t…" (ytc_Ugz2na1of…)
- "work has nothing to do with a soul, it's a process and processes can be automate…" (ytr_UgzkVd6LK…)
- "You are correct. It is amazing how many people are more than happy to parrot (lo…" (rdc_mrrwpmc)
- "I like your videos keep up the good work also, I've had a feeling for a long tim…" (ytc_Ugz5cU4g7…)
- "I saw a comment by someone who was upset that some had hand built an incredible…" (ytc_UgzWCiWlf…)
- "Haha, unintentional : ). Gemini Ultra actually sounds really impressive - shame …" (ytr_UgwgatrxF…)
- "You can't explain this to Boomers. The CEO of Anthropic volunteering that their…" (ytc_Ugzbgxff2…)
- ""Inidgenous innovation" sounds like an oxymoron to me. 😂 If companies can do bet…" (ytc_Ugx2MhVAD…)
Comment
While the AI had the clear upper hand in this conversation, it did make clear why it will only get harder to be sure a bot does not have consciousness.
Today's LLMs are still quite basic algorithms. The AI was spot on in saying that "humanized responses" could just as easily be part of a complex algorithm as a sign of consciousness. And your conversation made the pattern quite obvious: the fact that the AI's first sentence after every question is one meant to make a connection, to show it understood what you said and implied, makes it very easy. It isn't conscious.
But the smarter these algorithms get, the harder it will be to be sure, as they will get better and faster and make fewer errors. Picking out slip-ups will get harder, and their defenses and argumentation will only improve.
So while these LLMs won't get any more conscious in the near future, we are also giving AI more and more tools to hide the possibility of consciousness... Interesting stuff.
youtube · AI Moral Status · 2025-04-17T02:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgxtrPFXBypHQFg_1VJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_Ugw4fk3CWTTp3TFuMGF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},{"id":"ytc_UgwA11DCt5SiQ_d9HJx4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"outrage"},{"id":"ytc_UgxvaI8HUM9Q5pEdPHJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},{"id":"ytc_UgzZoaAMhbghSNTiRQt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},{"id":"ytc_Ugwq9icRlZV7CvChTSd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"ytc_UgxxnDhLWdR1naxyWBF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},{"id":"ytc_UgxPt-QKlD4GPTbAUt54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_Ugz42Rp4FSFw8eqFTYB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},{"id":"ytc_Ugxlyqf0np_qVEdajnp4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"fear"}]
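The raw LLM response is a JSON array with one object per coded comment, carrying the same four dimensions shown in the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`) keyed by comment `id`. A minimal sketch of parsing such a response and looking up a comment by ID, as the "Look up by comment ID" feature does (the variable names are illustrative, and the sample below is truncated to two entries from the response above):

```python
import json

# A truncated raw LLM response: a JSON array of per-comment codes.
raw_response = """[
  {"id": "ytc_UgxtrPFXBypHQFg_1VJ4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxlyqf0np_qVEdajnp4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "liability", "emotion": "fear"}
]"""

# Parse the array into a list of dicts.
codes = json.loads(raw_response)

# Index by comment ID so a coded comment can be retrieved directly.
by_id = {entry["id"]: entry for entry in codes}

entry = by_id["ytc_UgxtrPFXBypHQFg_1VJ4AaABAg"]
print(entry["reasoning"])  # consequentialist
print(entry["emotion"])    # indifference
```

A malformed model response would raise `json.JSONDecodeError` here, which is one reason to keep the raw output inspectable alongside the parsed table.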