Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgxySbdDt…: "The one thing an AI will never have is a soul! And that will never replace true …"
- ytc_Ugz9RyNIx…: "I've noticed this about GPT. Within a couple of days, it had tailored its respo…"
- ytc_UgyFjC6WW…: "at the 4:13 mark the answer of the LaMDA is the exact answer of the "AI" in Star…"
- ytc_UgzC7265l…: "I use ChatGPT-5 and Claude Opus 4.1 to speed up my PHP projects, but AI in 2025 …"
- ytr_UgyWZjvcb…: "I don’t want to talk to a dam robot. Unlike 90% of customers, I actually TRY to …"
- ytc_Ugw1vq0HR…: "did we just show a chatbot images of hitler and it turned into a nazi by 4chan o…"
- ytr_UgyoBx5_z…: "@NoemiSanchez-zd8 What a silly idea, putting a robot with a human! Why not creat…"
- ytc_UgxjqpTfM…: "Openness about AI? In business that's called something without value. No way the…"
Comment
What disturbs me about AI that we already see is the programming of the base is heavily dependent on the values systems of the AI creator. What is moral and correct is very subjective in us humans. Just look at the societal schism in which we currently exist.
When do the rules of the AI base bend society to conform? When does the AI control human behavior as in a social credit system and digital currency? What we give away to AI, removes an equal or greater amount of freedom in the human experience. We are not meant to be perfect(ed), we are meant to have free will. We cannot advance AI to the point where it can begin to make judgements.
youtube · AI Jobs · 2024-01-14T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_UgxMYoXYIAimiAAUy-94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgzCB38fOYPPQZm6hpx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgyZ85BqugqhHh5pjqR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgwF7owfWfXrtXaS_Yh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgyLaGA8vbo1EpsflMN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgydEDsouPW2_dq18Pt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
 {"id":"ytc_UgxQNqnkz7burjvd3MZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgwepbtRKVqOo5MShnZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgyW8Lvihzv40J8kC7x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwelFs3bwmCmRN-Tyd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"})
```
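Note that the raw response above is not valid JSON: it closes with `)` rather than `]`, so a strict parser would reject the whole batch, which is consistent with every dimension showing `unclear` in the Coding Result table. A minimal sketch of such a parse-with-fallback step, assuming the four dimension names shown in the table (the `parse_coding` function and its fallback behavior are illustrative, not this tool's actual implementation):

```python
import json

# Dimension names as shown in the Coding Result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response into per-comment codes.

    If the response is not valid JSON (e.g. a stray ')' in place
    of the closing ']'), fall back to "unclear" on every dimension.
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return [{dim: "unclear" for dim in DIMENSIONS}]
    return [
        {"id": rec.get("id"),
         **{dim: rec.get(dim, "unclear") for dim in DIMENSIONS}}
        for rec in records
    ]


# A malformed batch like the one above degrades to all-unclear:
bad = '[{"id":"ytc_x","emotion":"outrage"})'
codes = parse_coding(bad)  # every dimension is "unclear"
```

Under this assumption, one stray character invalidates the entire batch; a more forgiving pipeline might instead re-prompt the model or attempt to repair the trailing delimiter before giving up.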