Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a record directly by its comment ID.
Random samples — click to inspect:

- "training" is simply storing the information in a highly dense statistical proba… (`ytc_UgzjAxdqS…`)
- But AI can be manipulated to purposely lie to conceal information. The programme… (`ytc_UgwXwluoS…`)
- the a.i. glitched toward the end .. it was so eager to get this over with😂… (`ytc_Ugwi491ar…`)
- no, im pretty sure most people can do it lol. you surely do not understand just … (`ytr_UgyNlPytR…`)
- Any ai company that steals my art is stupid, they be hurting their own efforts, … (`ytc_UgwEaEsse…`)
- Don't worry people. We have nowhere near the AGI with the capabilities this guy … (`ytc_UgwJWQvYK…`)
- >Gender bias was not the only problem, Reuters' sources said. The computer pr… (`rdc_e7im7tm`)
- it is fully based to want all gen ai tools to be ejected into the sun… (`ytc_UgyyB-9qc…`)
Comment
I mean, like, there's a reason why driving instructors say that driving on highways are the easiest. There's less things that can go wrong because there's less elements. The dragon's tail has no pedestrians, no stop signs, no confusing signs, etc. Self-driving cars run into issues the more elements exist to randomly screw up something, because they can't change their AI on a dime; if a pedestrian decides to cross the road but dosen't make it across in the exact allotted time any sane driver would wait until they were done but an AI programmed to wait only a certain amount of time wouldn't know anything had happened.
Source: youtube · Posted: 2022-05-16T05:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
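The coding result above can be represented as a small typed record. This is a hedged sketch, not the tool's actual data model: the field names come from the dimension table, and the example value vocabularies in the comments are limited to labels visible elsewhere in this section (the full label set may be larger).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CodingResult:
    """One coded comment, mirroring the dimension table above.

    Field names are taken from the table; the timestamp is kept as an
    ISO-8601 string rather than parsed, matching the raw display.
    """
    responsibility: str  # e.g. "none", "company", "ai_itself"
    reasoning: str       # e.g. "consequentialist", "deontological", "unclear"
    policy: str          # e.g. "none", "liability"
    emotion: str         # e.g. "indifference", "approval", "outrage"
    coded_at: str        # ISO-8601 timestamp

# The result shown in the table above:
example = CodingResult(
    responsibility="none",
    reasoning="consequentialist",
    policy="none",
    emotion="indifference",
    coded_at="2026-04-26T23:09:12.988011",
)
```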
Raw LLM Response
[{"id":"ytc_UgxdP0EGbd2Wl3jPYSd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgycVT7Ucpf_4N3qteN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwaAsvJZU34Y4K5Pih4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxcTz4z5IvaS6M7qOt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugw1OnYwOnNzasPCFD94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxKA6rKlMNK4GWNGe94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxSl1-MCkydfjPmphd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzE8KaJoBTqz84QerB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyCfhi0vpWVIrcAakt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"skepticism"},
{"id":"ytc_UgxVteF9r2N6NjwU7GF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}]