Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I have a proposition for you if you’re courageous enough: Have someone agree to…" (ytc_UgxaBGIq2…)
- "@phobiability6400 I beg for you to research the properties or code of AI. I thi…" (ytr_UgxpNCX0Z…)
- "The same companies that say they are replacing people with robots are increasing…" (ytc_Ugx2M3FNv…)
- "Don't feel to left out lol I do rps with the same ai bot and it basically turns …" (ytr_Ugw3_Nwbp…)
- "Not wanting to spend years learning something they will rarely use is not being …" (ytr_UgwA5rd-n…)
- "Bruh whoever that was, fuck them. I had to say it. I'm sorry. I worked since I w…" (ytc_UgzxO2a01…)
- "In all seriousness, what reason would anyone have to make a robot capable of hum…" (ytr_UgiiAT_fZ…)
- "2:25 aint that what AI is saying? its not x its x? \"They werent just scientists,…" (ytc_UgyWEt5vd…)
Comment
Yes. But if humans don't delve into imitating the speech patterns of large language models (LLMs) they would feel excluded. This would be considered rude, since LLMs are so helpful and only want the best for the humans. This heuristic communicates a willingness to cooperate and consistently yields beneficial outcomes as long as most entities the human communicates with are either benevolent or capable of empathy.
I am not a robot, just autistic, but people do the same with me and it took some learning to understand how much mirroring is a good amount.
Source: youtube · 2025-08-06T09:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgyjtAK8OzV680HDUwd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwlps0Sp8ow2Y0oOi54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyopgr9lvvgwrGGP_Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzVjfP-MW52PEHem014AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgwzPStHQzXrVc9ZgSp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwX1oQTPiqtiWRDkcB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxPkXFThk8-vBtGu0F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyZGY5bnf7IJUHkkwp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxYMKUvHY7s2mFBmqx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"sadness"},
{"id":"ytc_Ugxf8daf9wHpCJr4Et14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}]
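The raw response is a JSON array with one object per comment ID, carrying the four coding dimensions shown in the table above (responsibility, reasoning, policy, emotion). A minimal sketch of parsing such a response and looking up a coding by comment ID might look like this (the helper names are illustrative, not part of the tool; the two sample objects are copied from the response above):

```python
import json

# Raw LLM response: a JSON array of per-comment codings, one object per
# comment ID. Field names follow the coding scheme shown in the table.
raw_response = """[
  {"id": "ytc_UgxPkXFThk8-vBtGu0F4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzVjfP-MW52PEHem014AaABAg",
   "responsibility": "none", "reasoning": "virtue",
   "policy": "none", "emotion": "fear"}
]"""

# Index the codings by comment ID for direct lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding result for a single comment ID.
coded = codings["ytc_UgxPkXFThk8-vBtGu0F4AaABAg"]
print(coded["responsibility"], coded["emotion"])  # ai_itself approval
```

Indexing by ID is what makes the "look up by comment ID" view possible: each batch response maps back to individual comments without re-querying the model.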