Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment directly by its ID (a minimal lookup sketch follows).
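Outside the UI, the same lookup can be done against an exported batch file. The sketch below is a minimal example, assuming the export is a JSON array of records shaped like the raw response at the bottom of this page; the file name `coded_comments.json` and the function name are hypothetical.

```python
import json


def lookup_coded_comment(comment_id: str, path: str = "coded_comments.json") -> dict | None:
    """Return the coding record for one comment ID, or None if absent.

    Assumes `path` (a hypothetical export file) holds a JSON array of
    records shaped like the raw LLM response shown below.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return next((r for r in records if r.get("id") == comment_id), None)


# Example: pull up the first record from the batch at the bottom of this page.
record = lookup_coded_comment("ytc_Ugz6lCWMyna-A4p_opx4AaABAg")
if record is not None:
    print(record["responsibility"], record["emotion"])
```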
Random samples — click to inspect
- I don't understand the double standard with regards to music vs art either. it's… (ytc_Ugz62_uwt…)
- Every one will be in the hospital they wont have to pay for any thing as AI ROBO… (ytr_UgzpiYq7E…)
- 1:29 ...excuse me? Did AI just ask "what inspired this question?" ..😶what!? I th… (ytc_UgyEXKbOo…)
- I’m sure the person that replaced her also calls themself an ‘AI artist’; pathet… (ytc_Ugye13Omc…)
- The consent aspect he mentions is interesting, because even without sentience as… (ytc_Ugzg0vY1u…)
- It also goes the other way: AI tools help us to make more secure systems, and it… (ytr_Ugwa_pQb1…)
- I'm not a disabled artist, but I've worked with a few as part of my art therapy … (ytc_UgzSKnM8I…)
- Honestly like that movie I robot the AI Vicky wanted to take over society. Becau… (ytc_UgzA9H2eI…)
Comment
"Learning Language Models are incapable of ever achieving complex thought because they only predict the next word in a sentence, similar to autocomplete."
I've heard this argument a large number of times, but I don't feel that it is valid because I follow the same process. When I construct a sentence, I have a vague idea of the shape of what I want to say, but I still construct the sentence one word at a time. I very rarely predict the last word in whatever sentence I am constructing, for example.
I don't know if LLMs have the potential for achieving AGI, but this isn't an argument against it, IMO.
youtube · AI Moral Status · 2023-12-08T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
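Each coded comment carries the same four dimensions shown in this table. As a rough sketch, a typed record could look like the following; the allowed values are only those visible on this page, so the real codebook may define more.

```python
from typing import Literal, TypedDict

# Value sets inferred from the labels visible on this page; the actual
# codebook may contain additional categories.
Responsibility = Literal["none", "developer", "ai_itself"]
Reasoning = Literal["mixed", "consequentialist", "deontological"]
Policy = Literal["unclear", "none", "regulate", "ban"]
Emotion = Literal["indifference", "mixed", "fear", "approval", "outrage"]


class CodingResult(TypedDict):
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```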
Raw LLM Response
```json
[
  {"id":"ytc_Ugz6lCWMyna-A4p_opx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwMotypJQgs_m3JFlV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgywSIbmpsSjJPNc8SR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwwycaGOCAnG9d44Dp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzFMtjfnzC2BwJNfsd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz08LF6Ni62Q-bwUjB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwZJ2HVfDHN-vp4UGl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxu3YWzqu-qjg-J2NB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzY5drA1yysvbg3tz54AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzRl2PJtzGBkekh6xh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
```
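Before a batch like the one above is stored, it can be validated against the label sets. The sketch below works under the same assumption that the values seen on this page are exhaustive; the prompt and storage layer are not shown here, so `parse_raw_response` is a hypothetical helper.

```python
import json

# Label sets as observed on this page; assumed, not confirmed, to match
# the full codebook.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"mixed", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "regulate", "ban"},
    "emotion": {"indifference", "mixed", "fear", "approval", "outrage"},
}


def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Dropping malformed records rather than raising keeps one bad line from failing a whole batch; a stricter pipeline might instead log rejects for re-coding.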