Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “In order to respond to the question, “Will AI become conscious one day?”, we fir…” (ytc_Ugy8f4tlW…)
- “why would you give a robot the feeling of pain pretty stupid pain is a weekness…” (ytc_Ugipm9QoH…)
- “To AI developers: start fingerprinting all of your products. This will permit …” (ytc_UgwvFiYFO…)
- “This is the only case of a robot killing a human as of Feb 25th 2025 11:57 Yekat…” (ytc_UgwobAd8B…)
- “Charles makes an excellent point about the "creative" process utilized by the AI…” (ytc_UgweSiwfe…)
- “The real issue is not Ai it’s people, probably we need to reconsider humanity…aw…” (ytc_Ugx4gy5-k…)
- “Do some research on the heat list mf. That ai worked fucking correctly if the ma…” (ytc_UgyGSVJ60…)
- “the entry level job changes. you're no longer a factory worker, you're a line l…” (ytr_UgyB_Ede1…)
Comment
Hank has become too doom and gloom on this topic, sad to see. Of course there are smart people who are anti-AI; but there are smart people with valid points on the other side too.
At the start he noted that we’ve been using AI forever, such as Netflix recommendations - which he says is for profit and not for thriving, but that’s incorrect. Netflix is an entertainment company, so they profit when you are more interested in their offerings; that is your thriving.
Platform: youtube
Video: AI Moral Status
Posted: 2025-10-31T10:4…
Likes: 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzFDi0lUCA03prWkHN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxGs8eJ4nVYIPbyk2x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgwcsohYQBBcdz8LU6d4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyHjZiPiE1gAOATu4N4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyTPPbocK63MseP8NZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxZqz0ef3MZGw9DcpB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyP4cRkKxcM1eWC6K14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxNg2ODI1m6JKw4KmF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw3oilAS147xH0xeCR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy-hHVeQJVTi9v40h14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
```
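The raw response is a JSON array with one record per comment ID, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such output might be parsed and validated before use; note that the allowed category sets below are inferred only from the sample records shown above and may be incomplete, and the `parse_coding_response` helper is hypothetical:

```python
import json

# Category values observed in the sample output above; the real coding
# scheme may include additional values not shown here (assumption).
ALLOWED = {
    "responsibility": {"none", "developer", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "outrage", "mixed", "resignation", "indifference"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate each record."""
    records = json.loads(raw)
    for rec in records:
        # IDs in the samples start with ytc_ (comment) or ytr_ (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

# Usage with a single (made-up) record in the same shape as above.
raw = ('[{"id":"ytc_Example123","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"approval"}]')
coded = parse_coding_response(raw)
```

Validating against a closed category set like this catches the common failure mode where the model invents an off-schema label, so bad records fail loudly instead of silently entering the coded dataset.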