Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "Bs thumbnails ,not just this one,promising the end of the world ,AI won’t put fo…" (`ytc_Ugz8zlapu…`)
- "I don't think there should be regulation on AI development. The very last people…" (`ytc_UgwIDr4kY…`)
- "Forget white collar jobs and go into the trade fields. Trade jobs will be the la…" (`ytc_Ugw9VFkyB…`)
- "Also, and this is completely from a total novice/ layman, I feel like any extra …" (`ytr_Ugwy_5Y38…`)
- "Automated mc donalds are probably going to be a thing pretty quick. Burger flipp…" (`ytr_UgyptcjPJ…`)
- "How is AI that consume so much water that needs so much land be taking over for …" (`ytc_UgzLmUP7U…`)
- "Zuckerberg isn't the story here...ai is. This is gonna end us if we aren't caref…" (`rdc_m84s3dk`)
- "Well probably achieve consciousness within machines not from an artificial intel…" (`ytc_UgwVIf77Z…`)
Comment
@matthewhornbostel9889 Ironically, in your best case scenario, we would be effectively demoted to something like animal status with all our choices being chosen by a machine...
In human terms we can say AI has no emotions, no feelings, but it is goal driven. What difference is there if AI does not have these human measures of satisfaction or frustration when it can simply apply some value to how effectively a goal is achieved? I hear talk of 'guardrails' but they impede performance and you can be sure they will be either completely left out or flimsy at best.
I feel like this is getting to point where the problems are so complex and interdependent that AI is the only intelligence with the capability to fully grasp these problems and provide solutions - but that seems unwise and will just accelerate humanities loss of control.
We have taught this 'infant' everything human, including all our methods of gaining our own goals, our megalomania and carelessness - how and why should we expect anything less than total disruption? Maybe that IS a good thing because currently we seem to be walking backwards into a variety of dangerous places - we are intelligent enough to be at great risk.
youtube | AI Moral Status | 2025-04-28T06:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytr_UgyfF6QXO6jqVjh4g514AaABAg.AHPFSMBQX0UAHQqopQaaCO","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgytvmpaFgzNIhTEXDN4AaABAg.AHP1jTbDjReAHPN40-GLCn","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgzrRiJFzGw99pmTaR94AaABAg.AHOxL6yiUfrAHPUIRabcnA","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgwWTHgl7cVfx9MFtJN4AaABAg.AHOwUUhSKZLAHS1ad4-F87","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytr_UgwWTHgl7cVfx9MFtJN4AaABAg.AHOwUUhSKZLAHSF0FKEVhx","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytr_UgwHxYgbB-TfOixEaOB4AaABAg.AHOwJHnZEj1AHR9bWKKS-j","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgwHxYgbB-TfOixEaOB4AaABAg.AHOwJHnZEj1AJ-iD0fv6Rj","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytr_UgxBM-JXleXn2KDyDdB4AaABAg.AHOtBy65WLMAHPVXvOVzRI","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgyVD7lcuxhgf1XIFfZ4AaABAg.AHOpa9-6s9MAHPP4GtmO6Q","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytr_Ugy01BtmXM0LOPMrkMF4AaABAg.AHOoA0P4SEFAHQHUX5Ya5k","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
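A raw batch response in this shape can be checked before ingestion. Below is a minimal validation sketch; the allowed values per dimension are an assumption inferred only from the codes visible on this page, not the full codebook, and the function name is hypothetical:

```python
import json

# Dimension vocabularies inferred from the samples shown above
# (assumption: the real codebook may include additional categories).
SCHEMA = {
    "responsibility": {"developer", "ai_itself", "distributed",
                       "government", "none", "unclear"},
    "reasoning": {"virtue", "consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "mixed", "outrage", "resignation", "indifference"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose id is
    present and whose four dimension codes are all in the schema."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid
```

Records with a missing or out-of-vocabulary code are dropped rather than repaired, so malformed model output cannot silently enter the coded dataset.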