Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
@ 8:10 Always wondered how Amazon was so awesome when it came to taking my order…
ytc_UgzSR81ZO…
If nothing near this ever happens the years will be remembered as the ai cringe …
ytc_UgyY13gFh…
I dabble in 3D art and when I show my projects to other people or artists, I can…
ytc_Ugzr2FYu1…
I dont think that is efficient. The data sets on which these programs are based …
ytr_Ugx34HOXa…
The ghost characters in Pac-Man were claimed to use "AI" in the 1980s. The clai…
ytc_UgwbEuxnk…
I think you're grossly missing the point about what's driving these products bei…
ytc_Ugz-FqF3C…
AI is the future Version of the negro slave with a master born into bondage and …
ytc_Ugz5RyHpW…
Why don't they have self-driving trains I mean they're on rails how hard could t…
ytc_UgzdZgcXg…
Comment
Love your stuff. Quickly I’m responding to a question where you ask (I’m paraphrasing) “basically can’t we wait and react to something that AI does if and when it finally shows that it can cause a lot of damage and then we cumulatively react and modify our behavior “
My response would be that take a look at global warming trends? We have sufficient information to make the connection between human activity and the destruction of our environment yet we continue down this dangerous path. Seems to me that humans fail to respond even when something slaps them in the face. We can have our house wiped out by a “100” year storm in Florida rebuild in the exact same plot of land and act surprised when it gets blown away 2 years later. So why would we even gamble with something that has so much potential for unrealized danger?
youtube
2024-12-26T13:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugz--w5v9NLFuI0HLNR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyK9PqiG93vC5Z5qgh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz5PNBsAB935H-5Fh94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwHfOPWI1WN22d0AvR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugyxhagj0nv-U8MyCTt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzFrqnZ7sdgoG3bF4R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzwqblHF83JZ5MCWvR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxIiyW4_QdAqW7rdNV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwdEojchf_Bj2bqH9V4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzawtYWlWT1Z9p0sYV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
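The raw LLM response above is a JSON array in which every record carries the same five fields as the coding-result table (`id`, `responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch response could be parsed and indexed for the "look up by comment ID" view — the function name and validation logic here are illustrative assumptions, not the tool's actual implementation:

```python
import json

# Two records copied from the raw response above, truncated for brevity.
raw = """
[
 {"id":"ytc_Ugz--w5v9NLFuI0HLNR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgyK9PqiG93vC5Z5qgh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]
"""

# The five dimensions every coded record is expected to contain.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse a batch coding response and index the records by comment ID.

    Raises ValueError if a record is missing any expected dimension,
    so malformed LLM output is caught before it reaches the UI.
    """
    records = json.loads(text)
    by_id = {}
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing keys: {missing}")
        by_id[rec["id"]] = rec
    return by_id

codings = parse_codings(raw)
print(codings["ytc_Ugz--w5v9NLFuI0HLNR4AaABAg"]["emotion"])  # fear
```

Indexing by `id` makes the per-comment lookup shown at the top of the page a single dictionary access rather than a scan of the array.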