Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or inspect one of the random samples below:
- I don't see how this robot can see / Where are the caneras on this thing?… (ytc_UgyW4YeaU…)
- @callyral yeah but then they're gonna go onnnn and onnn about how "wellll ermmm … (ytr_UgzmF9Vx1…)
- Haha, that's a clever one! Sophia's playful banter about wisdom definitely shows… (ytr_Ugw8oPiUj…)
- Bittensor $TAO solves all issues of centralized Ai. Study it and thank me later … (ytr_Ugw7NGojg…)
- @rudyschwab7709 Which means we are a threat. Not machine learning. We have ne… (ytr_Ugybc7Rz_…)
- the only way ai will actually be useful is if we can simulate our universe and l… (ytc_UgxYJenIO…)
- It's almost like reneging on reasonable deals made by previous administrations a… (rdc_e2wgovd)
- The truth is it should be illegal to have any kind of AI assist without at least… (ytc_UgzOxnlXk…)
Comment
Ok but like, who is teaching chatgpt morality and ethics? When ai starts to become a bigger part of our world, how can we be sure it acts in the interests of humanity? This is actually a serious question.
When Tucker Carlson (uck, I know, I hate him too) asked Sam Altman (ceo of OpenAI) how they were teaching ChatGPT morality, he vaguely said "we consulted a group of moral philosophers", then when asked who those were and what their particular beliefs were, Sam said "well, I dont want to dox my team" and kept skirting around the issue. Highly recommend you watch that interview, it gave me chills.
Source: youtube
Posted: 2025-10-30T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwDcp9ifDnNDO9abUN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgynCP0SHYfnUz3BMOl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzz2eIoodn620SdekZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz2IzjF9BpwCpVhEdR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxfHQ2rhqhBqz5oB-B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxMDX4BRYHYl3x4_FZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgziK-ZsfOTWv8vZCHN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyZOpOP90FBozU8Oeh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz3twJucw_2A8iPEE14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyJvvZBzEGbFZ_0Bwd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
```
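A batch response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal, hedged example: the allowed value sets are only the labels observed on this page (the full codebook is not shown here and is an assumption), and the two sample records at the bottom use made-up IDs for illustration.

```python
import json

# Label sets observed on this page; the complete codebook is an
# assumption, not documented here.
RESPONSIBILITY = {"company", "ai_itself", "user", "none"}
REASONING = {"consequentialist", "deontological", "unclear"}
POLICY = {"regulate", "liability", "none"}
EMOTION = {"fear", "outrage", "resignation", "approval", "indifference", "mixed"}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw JSON batch response and keep only records whose
    labels fall inside the observed value sets."""
    return [
        rec for rec in json.loads(raw)
        if rec.get("responsibility") in RESPONSIBILITY
        and rec.get("reasoning") in REASONING
        and rec.get("policy") in POLICY
        and rec.get("emotion") in EMOTION
    ]

# Hypothetical records for illustration (IDs are made up):
raw = '''[
  {"id": "ytc_example_ok", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_example_bad", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]'''

coded = parse_codings(raw)
print([rec["id"] for rec in coded])  # only the first record survives
```

Dropping (rather than repairing) off-schema records keeps the downstream table clean; a real pipeline might instead log the rejects and re-prompt the model for those IDs.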