Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up by comment ID.
Random samples

- `ytr_UgxMEllv5…` — @josehumdinger6872 AI is not "studying" anything. It's not actually intelligent,…
- `ytc_Ugw9d0C7X…` — self learning AI pulls data from the internet so if there's more of one kind of …
- `ytr_UgzO2d7Hb…` — Sounds nice, but heads up that having certain “intentions” is completely arbitra…
- `rdc_lu5y9vx` — Interestingly, this is already how some jurisdictions work: fictitious CP is not…
- `ytc_Ugz4tzJYM…` — No matter what you do with your life, remember this, Jesus Christ loves you so m…
- `ytr_UgyLCtoo1…` — @TheDemonicMushroom re 'Ai art is bad, period. If it didn’t take art from real a…
- `ytc_UgzAzF0UP…` — Well done Mr Sanders, you make very relevant points and propose good solutions. …
- `ytc_UgzbdQ8YO…` — And then there are still people keep saying AI is here to make our life better, …
Comment

> This is only the beginning they have just created Parallel ai about a week ago essentially its an ai that checks on its work kinda like how Grok 4 Super works with multiple agents an thats just one of the discoveries there are countless others being made all the time now we have come insanely far in just a few years with massive improvements an there updating these models literally everyday an there constantly improving an also people just don't know how to prompt correctly with the model there using in most cases. Its only a matter of time its better then us at everything an if your gunna compare it to the average human it already is.

youtube · AI Responsibility · 2025-09-30T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz3RtMUS13Svy7i5914AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzXka--GLJledX9ESJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw9_6X8jBKJ9hhsbot4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxZhHC7oGILxhFjUHZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxrP0ngyP8YpoSCmCZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugwx885ysf7Vl5-YP6R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwQnakXn0mhFkHZBlp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwDkJdpdMuLZLstAwR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgyFDR52JNmvepR4vWt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyF5KPaOLY3wsFbMBd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
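The raw LLM response is a JSON array of per-comment codes (`id`, `responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how a lookup-by-comment-ID view could parse it — the JSON shape is taken from the dump above, but the dict-keyed index is an illustrative convention, not the tool's actual implementation:

```python
import json

# Two rows copied from the raw LLM response above.
raw_response = """
[
  {"id": "ytc_Ugz3RtMUS13Svy7i5914AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzXka--GLJledX9ESJ4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
"""

# Index the coded rows by comment ID so a single comment's codes
# can be looked up directly (hypothetical indexing scheme).
codes = {row["id"]: row for row in json.loads(raw_response)}

row = codes["ytc_Ugz3RtMUS13Svy7i5914AaABAg"]
print(row["responsibility"], row["emotion"])  # ai_itself indifference
```

Keying on the comment ID mirrors the lookup view: the same ID shown in the sample list resolves to one coded row from the raw response.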