Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (truncated previews):
- `ytc_UgzvehiUb…`: "Tesla stopped putting radar/lidar in their cars and are only relying on video pr…"
- `ytc_Ugxc09Y79…`: "In the 1920's there was a study done that included scientists and engineers invo…"
- `ytr_UgxKZDYBN…`: "It's part of the cultish mindset some of these people have. They genuinely thin…"
- `ytc_UgxcbSVge…`: "stuff like this only proves to me that my life does not matter and that i should…"
- `ytc_UgyoPmUvs…`: "The problem is these companies have spent too much money on AI to admit it doesn…"
- `ytr_UgxtL8QSs…`: "The value of image generation is so over hyped. Yes, it creates absurdly realist…"
- `ytc_UgwEV7zpV…`: "The thing I do Respect from Naomi AI Company is they don't Treat Adults like chi…"
- `ytc_UgwzODwz9…`: "Or or, the poor and middle class need to stop hopping on the AI hype train. Thi…"
Comment
> The first question I would ask: what does being "objective" really mean?
> Say- if we were to ask the AI to grade a whole student body, a school for example- on a piece of creative writing- how would the grades received by the students from the AI, correlate with the grades between different teachers. What would that tell us about the AI and the teachers?
> How would we make an optimal "marking" machine, while improving ourselves by using it?
> How do we even begin to decide what a "statistical anomaly is", when the question is no longer- concensus, but optimization.
> The obvious answer is trial and error, but the thing you are playing with here- education, is not unimportant. In fact, if the "error" part goes really wrong, that's a recipe for apocalypse. Terrible education and unsustainable beliefs are the most likely thing that might end our existence. That's my opinion though. I just don't want to see us fail.
Source: youtube · Posted: 2023-06-24T23:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz5WXaXziw05GUjU9x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwvG5HCRRkDlOodkpd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwKxS4ZfsEy3hn4tO54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz62lgq0rNZwmoRmCx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzbtukhe2ba8bb7PfN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzRwpCXIFXkHr9hqxF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxGbXpeympB340d-RR4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyTytVoOxTf29rg9cV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwEcy1JBN5T9vN8kWx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxHyyZUvjvI3wGisz94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
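The raw response is a JSON array with one object per coded comment, keyed by comment ID, with one value per coding dimension. A minimal sketch of how such a batch could be parsed and validated before use (the field names come from the response above; the allowed-value sets are assumptions inferred only from the values visible on this page, not the full codebook):

```python
import json

# Dimension names from the raw response; value sets are assumptions
# based only on the values observed above, not the full codebook.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"approval", "indifference", "fear", "outrage", "resignation"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM batch response into a lookup table keyed by comment ID."""
    codings = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            # Reject records whose values fall outside the expected vocabulary,
            # so malformed model output fails loudly instead of silently.
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {rec.get(dim)!r}")
        codings[cid] = {dim: rec[dim] for dim in ALLOWED}
    return codings

# Usage with a single hypothetical record (ID "ytc_X" is illustrative):
raw = ('[{"id":"ytc_X","responsibility":"none","reasoning":"virtue",'
       '"policy":"none","emotion":"approval"}]')
codings = parse_codings(raw)
print(codings["ytc_X"]["emotion"])  # prints "approval"
```

Keying the result by comment ID mirrors the page's own lookup-by-ID workflow: once parsed, any comment's coding can be fetched directly from the batch.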