Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- And we would be applying these to fully functional robotic bodies... "Nah, im ok… (ytc_UgyYnk0TY…)
- Questioning NYCC's claim they couldn't remove him from the con due to contracts.… (ytc_UgyQaN2Vy…)
- This case is so insane to me. Absolutely not a suicide. The cops just walked in … (ytc_UgxWIy-g3…)
- AI isn't "good" it takes from real artists and mashes existing work together. Ju… (ytr_UgztP1uBC…)
- Then ur chatGPT did not read the last book of old Testament, last 2 verses, befo… (ytr_Ugzc-MPyn…)
- Meh, They didn't read the terms of policies of OpenAI, OpenAI can use your chats… (ytc_UgxeA-nrx…)
- So what about all the employees that Lyft and Uber currently have that are out t… (rdc_cylw1gz)
- I think theres a lot of people a bit insecure about this. AI doesnt stop you cr… (ytc_UgwEq3GI4…)
Comment
It really depends on the data you train AI models. If your data is biased, then you'll end up with biased models. Also, neural networks that are still used primarily in AI often make unsensible connections that lead to these AI "delusions." In other words, these models don't have the same understanding as we do. Similar to computers adding, computers only follow algorithms, or steps, but do not know what adding means. I'd also like to add, AI is not that profitable. Statistically, most companies implementing AI do not see significant increase in profit, implying that AI is not that advanced, yet. It still has many flaws, which researchers believe will take many more years to fix.
youtube · AI Moral Status · 2026-01-17T11:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgySbGEGeE5Iy_Lq1VN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzJ8ri_WPbJPc1-9Wx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyeRw1uc9vRHer5gV14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgypkMSN5BaCFhUgS3h4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxVSAWRsBAajCxT-iN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugzwwvk0l5BLzxJXsnd4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw80nUbOZ-SqPEtJXV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw8PJG2B_Tkdmrpc_B4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwsc4NykKFnOOsNdSR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"unclear"},
{"id":"ytc_UgwBAvaYioS1dBoSJBV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"}
]
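The "Look up by comment ID" action above can be sketched as parsing the raw batch response and indexing it by the `id` field. This is a minimal illustration, not the tool's actual implementation: the helper name `lookup_by_comment_id` is hypothetical, and the two sample rows are copied from the response shown above.

```python
import json

# A raw LLM response in the format shown above: a JSON array of per-comment
# codes. The field names (responsibility, reasoning, policy, emotion) match
# the dimensions in the Coding Result table; these two rows are copied from
# the batch response displayed on this page.
RAW_RESPONSE = """
[
  {"id": "ytc_Ugwsc4NykKFnOOsNdSR4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "unclear"},
  {"id": "ytc_UgwBAvaYioS1dBoSJBV4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "industry_self", "emotion": "mixed"}
]
"""

def lookup_by_comment_id(raw: str, comment_id: str):
    """Parse a raw batch response and return the codes for one comment ID.

    Returns None when the ID is not present in the batch (hypothetical
    helper; the real tool's lookup may differ).
    """
    codes = {row["id"]: row for row in json.loads(raw)}
    return codes.get(comment_id)

# Example: retrieve the coding for the comment inspected above.
result = lookup_by_comment_id(RAW_RESPONSE, "ytc_Ugwsc4NykKFnOOsNdSR4AaABAg")
print(result)
```

Building the `{id: row}` dict once makes repeated lookups O(1), which matters when a batch response codes many comments at a time.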