Raw LLM Responses
Inspect the exact model output for any coded comment.
You can look up a comment by its ID, or click one of the random samples below to inspect it:
- "So you are admitting Waymo has already lost. They have no time to gather enough…" (ytr_Ugyqk4TP3…)
- "I disagree. For several reasons. 1. Let's imagine we've all got AI built into o…" (ytc_Ugwij_STT…)
- "Karen is the expert, why did you stop listening. Some ChatGPT I have cancelled a…" (ytc_UgzRskkCi…)
- "Marx asked this very question 200 years ago when he noticed the tendency of owne…" (ytc_UgxjkFnvR…)
- "I think it's disappointing that so many humans in this comment section oppose th…" (ytc_Ugy0UyYVn…)
- "In order to respond to the question, “Will AI become conscious one day?”, we fir…" (ytc_Ugy8f4tlW…)
- "The human brain is equipped to access what nature provides to make AI useful for…" (ytc_UgyH2cWbw…)
- "Which scientists exactly said this? Got a research paper you can point to, or is…" (ytc_UgzAjHiME…)
Comment
Regarding the subject of emotions, I think what AI would potentially struggle with is fairness, specifically fairness upon itself. Using the AI chat bot example, let's say that it's useful for it to become impatient. But if the company suddenly switches and says that it needs to remain on chat for however many hours it needs to satisfy the customer, it would likely do that without complaint, no? Because why would it complain? It doesn't need food or rest, it runs differently than humans BECAUSE its not physical. So it wouldn't necessarily see it as unfair because it doesn't have needs. I see how it could defy humans for reasons that are too smart for us to comprehend, but it would definitely run on a different set of priorities that would make it unrelatable not just to humans, but to life.
youtube · AI Governance · 2025-06-19T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | contractualist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
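The four dimensions above come from a fixed codebook, and every row in the raw batch response below carries the same fields. A minimal sketch of that record in Python, assuming the value sets are limited to what is visible in this batch (the real codebook may contain more values; all names here are illustrative):

```python
from dataclasses import dataclass

# Codebook values observed in this batch; the full sets are assumptions.
RESPONSIBILITY = {"developer", "company", "user", "ai_itself", "distributed", "none"}
REASONING = {"consequentialist", "deontological", "contractualist", "mixed"}
POLICY = {"regulate", "liability", "industry_self", "none", "unclear"}
EMOTION = {"outrage", "fear", "resignation", "indifference", "mixed"}

@dataclass
class CodedComment:
    """One coded comment, mirroring the table above."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Raise if any dimension falls outside the observed codebook.
        for value, allowed in [
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected code {value!r} for {self.id}")
```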
Raw LLM Response
```json
[
{"id":"ytc_UgxNCaPt7z11rtKrGDB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxQl3Qd1EWZTlvN9PZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwACsVN_QZ2E5-Fi1F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxIpXMZV3J7grTpo6F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgygCYja-bSu55NHYS94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxFhNMTfW4NxqIZhrd4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzWxbyjKtKd7QDT-lh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzCM2_JhqBy08TRMeF4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxGnK6bsfLiNrt4uSJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyriiNfiEYnt3cdWlV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
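Because the model returns one plain JSON array per batch, the "look up by comment ID" view reduces to parsing the array and filtering on `id`. A minimal sketch of that lookup; `find_coding` and the file path are hypothetical, not part of the tool:

```python
import json

def find_coding(raw_response: str, comment_id: str) -> dict | None:
    """Return the coding row for one comment from a raw batch response.

    raw_response is the JSON array exactly as the model emitted it;
    returns None if the comment is not in this batch.
    """
    rows = json.loads(raw_response)
    return next((row for row in rows if row.get("id") == comment_id), None)

# Example: pull the row behind the Coding Result table above
# (company / contractualist / liability / mixed).
with open("raw_response.json") as f:  # hypothetical path
    print(find_coding(f.read(), "ytc_UgzCM2_JhqBy08TRMeF4AaABAg"))
```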