Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID, or browse the random samples below.
Random samples:
- ytc_Ugz6Pcr5x…: "OH THESE POLICIES MAY END WHAT IS JUST THE BEGINNING! WITH AI, SURVEILLANCE THRO…"
- rdc_jtgdml8: ">between a diagnosis and what Cigna considers “acceptable tests and procedure…"
- ytc_UgwSsAqOM…: "In 6 months ai generating 90% of the code, that's absolute horshit. Just to make…"
- ytc_UgyXtT2-A…: "Well said! People have their heads in the sand (or somewhere else) about this. W…"
- ytc_Ugz1aigq7…: "This CEO looks like a pedophile. Man created an AI. An artificial intelligence. …"
- ytc_UgwdC8nH-…: "i only use AI for copyright free references at best some times or brain stormin…"
- ytc_Ugz6yubb2…: "Schools will be closed and students sent home. No more behavior problems, buildi…"
- ytc_UgwH2NhOf…: "Yes. Yes, we should ban all future progress in AI, since we don't know which ne…"
Comment
What I now see as the ultimate result of ALL A.I. functionality is simply MISTRUST. These will all ONLY serve to make humans finally distrust ALL digital things. However, it will also cause us to EVEN FURTHER distrust each other. THIS... will be the downfall of humanity. At least we'll all fall to the point of abandoning all tech, which may ultimately cause human population to fall to severely low levels. Because the people who DON'T learn to mistrust A.I. and others will be reduced to use-LESS idiots. There are A LOT of idiots already. I've been calling these people "non-player characters" for years. It's only getting worse. People don't know HOW to disconnect. Even I am struggling more and more. But, I'm trying to learn about meditation and practice it. I want to only INCREASE my number and depth of my meditations and TRY to ground myself in the wholeness of the universe. That's all I can think to do. PERHAPS it can help me and many others have super-deep epiphanies, which MIGHT set us free from all this nonsense.
Platform: youtube
Video: AI Moral Status
Posted: 2025-06-07T21:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzD9nhLxlrHoGCU8Zx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwCJGzZl3JrCeLXDKt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwgVwgesAM005ZG3iZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugy55H6aaTel_tXuPpV4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy9EfWtH8M2jq2pzld4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzUSN3Fr37QUSFm8Zp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz1aDqDmASLrAvsf6R4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxJ9V2OBtQEbauWukZ4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxTYqN6AmQVv5wEFbR4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzxHz8FP2FuALKqOZd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"resignation"}
]
```
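A raw batch response like the one above can be indexed by comment ID so that any coded comment's dimensions can be looked up directly. The sketch below is a minimal illustration, assuming the response is a JSON array in which each element carries an `id` field plus one key per coded dimension (as in the sample shown); the function name `index_codings` is hypothetical, not part of the tool.

```python
import json

# Example raw response text (two rows copied from the sample above).
raw_response = """[
  {"id": "ytc_UgzD9nhLxlrHoGCU8Zx4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxJ9V2OBtQEbauWukZ4AaABAg", "responsibility": "distributed",
   "reasoning": "deontological", "policy": "regulate", "emotion": "approval"}
]"""

def index_codings(response_text: str) -> dict:
    """Parse a raw coding response and index the rows by comment ID.

    Returns a dict mapping each comment ID to its coded dimensions
    (everything in the row except the "id" field itself).
    """
    rows = json.loads(response_text)
    return {row["id"]: {k: v for k, v in row.items() if k != "id"}
            for row in rows}

codings = index_codings(raw_response)
print(codings["ytc_UgxJ9V2OBtQEbauWukZ4AaABAg"]["policy"])  # regulate
```

A lookup by comment ID then reduces to a single dictionary access, which is what the "Look up by comment ID" view above exposes interactively.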