Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Wait till AI realize that humans are a dangerous species for this planet. What d…" (ytc_UgyNkyq9L…)
- "I viewed the art style of an artist as their identity. Sure, there might be simi…" (ytc_UgxyeJ4pC…)
- "We are all going to end up jobless with AI basically taking most jobs unless it'…" (rdc_n6x8tst)
- "Well, I am pretty sure when a self learning level AI comes out. An AI that follo…" (ytc_UgzaFF-kX…)
- "AI prompted people to build stronger more intelligent AI. It does not care about…" (ytc_Ugw0O2kTX…)
- ""harmful" decisions: in real public decisions, there are always trade-offs; and …" (ytc_UgynE9iH1…)
- "AI ethics isn't about Terminators. It's about people using AI to do bad things. …" (ytc_Ugw5KoS1B…)
- "Why are you making this a racial thing when the headline should be "Why the fuck…" (ytc_UgwHnGJJY…)
Comment

> Unfortunately, people need to keep learning new skills even in middle age or later in life, or they risk being replaced by AI automation. AI can't do everything. I work in software, a field heavily impacted by AI, yet I'm not worried — the domain is vast, and AI will never be capable of doing everything. When technology advances, it also creates new jobs. For example, shopping websites in the future might become fully 3D experiences with VR. Advancement in AI will make security an even bigger issue going forward, generating new jobs in the IT industry. Technology is a financially secure profession (experience is key) to be in, if you can keep up with it lol.

Source: youtube · Topic: AI Moral Status · Posted: 2025-08-12T09:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_UgxpleylattzeD6_dP14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxGKem7pfuEc8Fj-y94AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzFwSifMTXKcaQgW2Z4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyvkABZ8g4_3IQ97eh4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgypWgVYP4f0H4rBmEt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgylHfIwlLb0fbIn5hF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugygq4BEkPfuaUnKOqx4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxWKudZWjvr9NaQE-J4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgyznEMuY_Z4M9M7vyB4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxJCWFtxQ8G9l2XAHN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
```
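A raw response like the one above can be parsed and queried by comment ID in a few lines. This is a minimal sketch, assuming the response is a JSON array of records that each carry the four coding dimensions shown in the result table; the `lookup` helper and the truncated two-record sample are illustrative, not part of the tool itself.

```python
import json

# Illustrative raw LLM response: an array of per-comment coding records
# (shortened to two records from the batch above).
raw_response = '''
[
  {"id": "ytc_UgxpleylattzeD6_dP14AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxGKem7pfuEc8Fj-y94AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
'''

# The four coding dimensions every record is expected to carry.
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def lookup(records, comment_id):
    """Return the coding record for a comment ID, or None if absent."""
    return next((r for r in records if r["id"] == comment_id), None)

records = json.loads(raw_response)

# Validate that no record is missing a coding dimension.
for r in records:
    missing = DIMENSIONS - r.keys()
    assert not missing, f"{r['id']} is missing dimensions: {missing}"

rec = lookup(records, "ytc_UgxGKem7pfuEc8Fj-y94AaABAg")
print(rec["responsibility"], rec["emotion"])  # company outrage
```

The same lookup works against the full batch response; an ID that never appeared in the batch simply returns `None`, which is worth checking before rendering a "Coding Result" panel.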