Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- Some people are concerned that AI, especially 'strong' AGI, will become sentient… (ytc_UgwbkEMK2…)
- Those people who try to use the disabled to justify AI being around remind me a … (ytc_UgyXZda8v…)
- Unethical and immoral: Facial recognition technology spacing, is what is being … (ytc_UgxLArEOZ…)
- CA maybe is fine. But living in the Republic states like PA and CT total the spe… (ytc_UgwZY-AWT…)
- How can he say that but put the ai in his robot omg so scary… (ytc_UgwHmmCRF…)
- The issue is the AI doesn't need to practice its art and it charges pennies comp… (ytc_UgwOpiPaa…)
- unable to read a room, build trust over a coffee, or handle the one thing AI can… (rdc_ohy59eu)
- Robots will teach us? In what way? If there is someone that will teach humans it… (ytc_UgyR_RsQw…)
Comment
Just remember Large Language Models (to be very rigorous, autoregressive LLM) are statistical representation of the language: they are trained to guess the next word that would complete an input text (based on probabilities), and iterate the process until the generation of a full answer (this iteration is the "autoregressive" part). So the completion (= the response) depends a lot of the prompt (= the question) and the history of the conversation. Being polite generates a mannerly conversation, being funny generates funny completions. And the "secret" for a good conversation with a AR-LLM: keep it focused on a topic.
Platform: youtube | Topic: AI Moral Status | Posted: 2025-05-01T03:4…
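The comment above describes autoregressive generation: the model repeatedly predicts the next token from the context so far, appends it, and iterates until a full response is produced. That loop can be sketched minimally, with a toy hand-written probability table standing in for a trained model (the vocabulary and probabilities here are illustrative, not from any real LLM):

```python
import random

# Toy next-token probability table standing in for a trained model.
# A real LLM computes these probabilities from billions of parameters.
PROBS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
    "dog": {"ran": 1.0},
    "ran": {"away": 1.0},
}

def generate(prompt, max_tokens=5, seed=0):
    """Autoregressive loop: sample the next token given the context,
    append it, and repeat until a stop condition is reached."""
    rng = random.Random(seed)
    tokens = prompt.split()
    for _ in range(max_tokens):
        context = tokens[-1]        # this toy model conditions on the last token only
        dist = PROBS.get(context)
        if dist is None:            # no known continuation: stop generating
            break
        words = list(dist)
        weights = [dist[w] for w in words]
        tokens.append(rng.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("the"))
```

This also illustrates the comment's point that the completion "depends a lot on the prompt": the starting token fully determines which continuations are reachable.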
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgyFKMd2voqhOIfmPg94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxXUHOrwdqQ9TdxkBp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzVy32mkoVrX2b2lE94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyDaVCivjQJFwRWvDF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"disapproval"},
{"id":"ytc_UgxQ7SZGVevNjEKTHxt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxvtS2h6LBG8CS4pv94AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxVpagq-KcFW8Lqah14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgysGp6eUGZAY3oM4VN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwsPI6uNYq0U3lMzud4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwiDLYqLjdI_m-kXIp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
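The raw response above is a JSON array mapping each comment ID to the four coded dimensions shown in the table. A minimal sketch of how such a batch could be parsed and validated against the codebook — the `SCHEMA` sets and `parse_codings` helper are hypothetical, with allowed values inferred only from those visible on this page:

```python
import json

# Assumed codebook: allowed values per dimension, inferred from the
# examples above. The actual codebook may define additional categories.
SCHEMA = {
    "responsibility": {"none", "company", "user", "ai_itself"},
    "reasoning": {"unclear", "virtue", "consequentialist"},
    "policy": {"none", "unclear", "regulate", "ban", "industry_self"},
    "emotion": {"indifference", "approval", "disapproval", "fear", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse the model's JSON array and index valid rows by comment ID.
    Rows with a missing ID or out-of-schema values are skipped."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            coded[cid] = {dim: row[dim] for dim in SCHEMA}
    return coded
```

Applied to the response above, `parse_codings(raw)["ytc_UgyFKMd2voqhOIfmPg94AaABAg"]` would return the row shown in the Coding Result table (responsibility `none`, reasoning `unclear`, policy `unclear`, emotion `indifference`).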