Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "With the increasing availability of the cheap, destructive capabilities of drone…" (ytc_UgzUj7Jf5…)
- "The simplest thing that did not get discussed is that - it is not as if we are t…" (ytc_UgzcbFmhg…)
- "i thought i was the only one using parent scenarios cuz mine is shit. i bully th…" (ytc_UgxmEv-wN…)
- "Not much we can do to stop it. These mega corporationss have the power of govern…" (rdc_d0fir59)
- "Aldous Huxley really was 100 years ahead of his time. AI, mass information and s…" (ytc_UgxJC0xF2…)
- "In fairness very few humans have emotions unless faced with their own mortality …" (ytr_UgxNeMogG…)
- "Would be fun to buy up a few of these and send AI slop up into it. Just pack the…" (ytc_UgyR0l4vo…)
- "I wonder, if someone posted a deepfake of Jill Biden and Obama old Joe just mig…" (ytc_UgzVEzMjv…)
Comment
This is scary as shit because ai does not know when it’s right & when it’s wrong. Assuming it doesn’t filter, which it does, it strives for the best answer be it right or wrong. That philosophy is good for google search, but when serious shit is in play, it’s unacceptable.
Amended: if it truly is producing % probability which I hope they would, what is it based on?
Trust but verify.
Source: youtube · 2025-05-10T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgyjsT3QahyrgJxrr3F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx9_Ldg2xcX3GtPTj94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzGaIl7oSWy6kY3dLt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwzbDokD7A7PvTJddZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyZVqFlfttoy2TteR54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy8P0G5C3UQP3veI-x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgywhRKHxiM5huFHokx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzn4IPj9mCuPfQH9AF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxlmXL0LcNWzuy49Nh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"skepticism"},
{"id":"ytc_UgwVaBX_KkauEjaJki94AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
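The raw response above is a JSON array of per-comment codings, which the dashboard presumably parses and indexes by comment ID to render the "Coding Result" table. A minimal sketch of that step is shown below; the field names match the JSON above, while the allowed category sets are assumptions inferred only from the values visible on this page.

```python
import json

# Allowed values per dimension: an assumption based on the values that
# appear in the sample response above, not an exhaustive codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "company", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "emotion": {"fear", "indifference", "approval", "outrage", "skepticism"},
}

# Two rows excerpted from the raw LLM response shown above.
raw = '''[
{"id":"ytc_Ugy8P0G5C3UQP3veI-x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyZVqFlfttoy2TteR54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

def index_codings(raw_json: str) -> dict:
    """Parse the LLM response and map comment ID -> coding dict,
    rejecting any value outside the known category sets."""
    by_id = {}
    for row in json.loads(raw_json):
        for dim, allowed in ALLOWED.items():
            if row[dim] not in allowed:
                raise ValueError(f"unexpected {dim} value: {row[dim]!r}")
        by_id[row["id"]] = row
    return by_id

codings = index_codings(raw)
print(codings["ytc_Ugy8P0G5C3UQP3veI-x4AaABAg"]["policy"])  # -> liability
```

The validation step matters in practice: LLM coding output occasionally drifts outside the codebook, and failing loudly on an unknown label is safer than silently storing it.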