Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment
I think the fact that they are non-linear are a very important factor of conciousness though, first off the human brain is way more complicated than anything humans have ever made, it has many more "weights" than an ai model can even hope to have in the forseeable future. what llms do is guess the next word in a sentence, nothing more, nothing less, because that is what they were designed to do. our brain can form connections however it sees fit, an ai cannot, because this would be too dangerous to let happen. It's what allows us to be human and have personal experiences but it's also what allows us to be racist among other worse things, why would researches allow that when allignment is already a problem. Ai are not concious, no matter how much it walks like a duck and quacks like a duck, it isn't a duck, and unless we change the way we design ai on a high level, it will never be concious, just more convincing.
Platform: youtube
Video: AI Moral Status
Posted: 2025-06-21T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytr_UgzpkzAeJJTlouBOTZp4AaABAg.AJjBSNvR-MZAJjGXH5cFxt","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgwS77Az1y3R2mUA6TN4AaABAg.AJeQoeM_bxHAJq5dR1FwVk","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgyUtJwvvVxYL5P9haZ4AaABAg.AJcBRvvUWjNAJdVS3zWyGX","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgyUtJwvvVxYL5P9haZ4AaABAg.AJcBRvvUWjNAJeUQDfB4nb","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugx6kGJhr1Dgzn5Vk5h4AaABAg.AJYFVlkUKiRAJYMV9QXWcD","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_Ugz0MvcA3yk12KxUwhR4AaABAg.AJSZ0y08A9QAJSZcCSEUt_","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytr_UgyGurV3DlGEpqRptdF4AaABAg.AJR7Ja-Hz-7AJwmnJRx2GD","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytr_UgxhWxhSFHiCP14cX_14AaABAg.AJOXmR5Z9HCAJs7vk8ssxP","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"resignation"},
{"id":"ytr_UgxhWxhSFHiCP14cX_14AaABAg.AJOXmR5Z9HCAJvXgCztcEo","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgwfQ6dCUf4PxU4velZ4AaABAg.AJN3W8Szr0nAJy9CvMNloa","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
```
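A raw response like the one above is a JSON array of per-comment coding records, each carrying the four dimensions shown in the Coding Result table. Below is a minimal sketch of how such a response could be parsed into a lookup table keyed by comment ID. The allowed value sets are inferred only from the values visible in this sample; the project's actual codebook may define others, and the function name `parse_coding_response` is a hypothetical helper, not part of any real pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the sample response
# above; the real codebook may include values not seen here (assumption).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed", "developer", "company", "user"},
    "reasoning": {"mixed", "consequentialist", "unclear", "deontological"},
    "policy": {"unclear", "none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "approval", "mixed", "fear", "resignation", "outrage"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes}.

    Structurally malformed rows are skipped; out-of-codebook values
    are coerced to "unclear" rather than silently accepted.
    """
    coded = {}
    for row in json.loads(raw):
        if not isinstance(row, dict) or "id" not in row:
            continue  # skip rows the model got structurally wrong
        codes = {dim: row.get(dim, "unclear") for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                codes[dim] = "unclear"
        coded[row["id"]] = codes
    return coded

# Example with a shortened, made-up ID:
raw = '[{"id":"ytr_example","responsibility":"developer",' \
      '"reasoning":"deontological","policy":"liability","emotion":"fear"}]'
print(parse_coding_response(raw)["ytr_example"]["policy"])  # liability
```

Coercing unknown values to "unclear" instead of raising keeps a single bad row from aborting a batch of otherwise valid codings, which matters when the response comes from a model rather than a validated source.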