Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by comment ID, or pick one of the random samples below.
Random samples:

- "the only kind of idiot that would argue doing art digitally is the same as gener…" (ytc_UgyJARda5…)
- "So basically, Chat GPT and Grok are purely statistical and by the numbers. If …" (ytc_Ugwr3AzDN…)
- "Figures that Google would create AI and right away try and brainwash it against …" (ytc_UgzA6cK1x…)
- "Ya the automated caller will keep u running from pillars to pillars and never sa…" (ytc_Ugwn17jc8…)
- "AI was made using other peoples work though. Its training set is literally compr…" (ytr_UgzeeEdpT…)
- "I don't know. If I am an insurance broker and I want to write the bill, I will …" (ytc_UgznbAJsD…)
- "I have written over four million words over this past half decade on the existen…" (ytc_UgwkdK90e…)
- "Yeah but when you take away the supply of jobs by automating them- say, taking a…" (ytc_UgxGyjFdj…)
Comment

> There is a new architecture that is not a silver bullet, but it is okay for a different set of problems. It's not based on LLMs, but rather something else. It is a completely different idea based on stochastics. I know, i developed a prototype in 2022. It works albeit it needs a lot of research.

youtube · AI Responsibility · 2025-11-22T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxBaFgxNGd9xVDx6jl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwjYiNcKwF3YL_npIV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugzz45IVFqYabVsOfet4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzIyDnOmzgPCiKgqod4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyrHe5s1BS12R97yqZ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwTm35G54OzZeyua3Z4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyBIeo4W6Nc20GZdhd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyPhfLmHSJhgsl2mwp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwJGtBQ5VpgiRKhJSB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyEBi2Bs0YivanUP-h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
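Responses like the one above arrive as a JSON array with one object per coded comment, so a small validation pass can catch malformed or out-of-vocabulary codes before they reach the results table. Below is a minimal sketch of such a check; the allowed values per dimension are inferred from the outputs shown here (the real codebook may contain more categories), and the `validate` helper name is hypothetical.

```python
import json

# A raw batch response in the same shape as the model output above
# (one object per comment, four coded dimensions plus the comment ID).
raw = '''[
  {"id": "ytc_UgyEBi2Bs0YivanUP-h4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]'''

# Allowed values per dimension, inferred from the responses shown above;
# the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"company", "ai_itself", "user", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "unclear", "industry_self", "liability"},
    "emotion": {"approval", "outrage", "resignation", "indifference", "mixed"},
}

def validate(rows):
    """Split coded rows into (valid, errors).

    A row is valid when every dimension carries a value from the inferred
    vocabulary; violations are reported as (id, dimension, value) tuples.
    """
    valid, errors = [], []
    for row in rows:
        bad = [(row.get("id"), dim, row.get(dim))
               for dim in ALLOWED if row.get(dim) not in ALLOWED[dim]]
        if bad:
            errors.extend(bad)
        else:
            valid.append(row)
    return valid, errors

rows = json.loads(raw)
valid, errors = validate(rows)
print(len(valid), len(errors))  # → 1 0
```

A check like this makes a mismatch between the model's free-form output and the coding table (such as the "none / unclear / none / approval" row above) detectable mechanically rather than by eye.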